I0102 19:07:18.794982 8 e2e.go:224] Starting e2e run "18528229-2d93-11ea-814c-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577992037 - Will randomize all specs
Will run 201 of 2164 specs

Jan 2 19:07:19.217: INFO: >>> kubeConfig: /root/.kube/config
Jan 2 19:07:19.220: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 2 19:07:19.236: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 2 19:07:19.266: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 2 19:07:19.266: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 2 19:07:19.266: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 2 19:07:19.275: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 2 19:07:19.275: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 2 19:07:19.275: INFO: e2e test version: v1.13.12
Jan 2 19:07:19.276: INFO: kube-apiserver version: v1.13.8
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 19:07:19.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Jan 2 19:07:19.411: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 2 19:07:19.442: INFO: Waiting up to 5m0s for pod "pod-192e85fd-2d93-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-n7b49" to be "success or failure"
Jan 2 19:07:19.495: INFO: Pod "pod-192e85fd-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.478014ms
Jan 2 19:07:21.747: INFO: Pod "pod-192e85fd-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304568086s
Jan 2 19:07:23.764: INFO: Pod "pod-192e85fd-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321468993s
Jan 2 19:07:25.872: INFO: Pod "pod-192e85fd-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430124063s
Jan 2 19:07:27.908: INFO: Pod "pod-192e85fd-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.465768535s
Jan 2 19:07:29.923: INFO: Pod "pod-192e85fd-2d93-11ea-814c-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.480604026s
Jan 2 19:07:32.276: INFO: Pod "pod-192e85fd-2d93-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.833240216s
STEP: Saw pod success
Jan 2 19:07:32.276: INFO: Pod "pod-192e85fd-2d93-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan 2 19:07:32.347: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-192e85fd-2d93-11ea-814c-0242ac110005 container test-container:
STEP: delete the pod
Jan 2 19:07:32.680: INFO: Waiting for pod pod-192e85fd-2d93-11ea-814c-0242ac110005 to disappear
Jan 2 19:07:32.687: INFO: Pod pod-192e85fd-2d93-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 19:07:32.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-n7b49" for this suite.
Jan 2 19:07:38.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 19:07:38.931: INFO: namespace: e2e-tests-emptydir-n7b49, resource: bindings, ignored listing per whitelist
Jan 2 19:07:39.056: INFO: namespace e2e-tests-emptydir-n7b49 deletion completed in 6.355631364s

• [SLOW TEST:19.781 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 19:07:39.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 2 19:07:39.252: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 2 19:07:45.105: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 2 19:07:50.002: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 2 19:07:50.103: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-j6lls,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-j6lls/deployments/test-cleanup-deployment,UID:2b6cc71b-2d93-11ea-a994-fa163e34d433,ResourceVersion:16949837,Generation:1,CreationTimestamp:2020-01-02 19:07:50 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
Jan 2 19:07:50.185: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 19:07:50.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-j6lls" for this suite.
Jan 2 19:07:58.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 19:07:58.379: INFO: namespace: e2e-tests-deployment-j6lls, resource: bindings, ignored listing per whitelist
Jan 2 19:07:58.454: INFO: namespace e2e-tests-deployment-j6lls deletion completed in 8.249818192s

• [SLOW TEST:19.397 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 19:07:58.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 2 19:07:59.516: INFO: namespace e2e-tests-kubectl-q876v
Jan 2 19:07:59.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q876v'
Jan 2 19:08:01.931: INFO: stderr: ""
Jan 2 19:08:01.931: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 2 19:08:02.949: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:02.950: INFO: Found 0 / 1
Jan 2 19:08:04.135: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:04.136: INFO: Found 0 / 1
Jan 2 19:08:04.953: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:04.953: INFO: Found 0 / 1
Jan 2 19:08:06.053: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:06.053: INFO: Found 0 / 1
Jan 2 19:08:08.028: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:08.028: INFO: Found 0 / 1
Jan 2 19:08:09.104: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:09.104: INFO: Found 0 / 1
Jan 2 19:08:09.949: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:09.950: INFO: Found 0 / 1
Jan 2 19:08:10.947: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:10.947: INFO: Found 0 / 1
Jan 2 19:08:11.970: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:11.970: INFO: Found 0 / 1
Jan 2 19:08:12.948: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:12.948: INFO: Found 1 / 1
Jan 2 19:08:12.948: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 2 19:08:12.954: INFO: Selector matched 1 pods for map[app:redis]
Jan 2 19:08:12.954: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 2 19:08:12.954: INFO: wait on redis-master startup in e2e-tests-kubectl-q876v
Jan 2 19:08:12.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gv29j redis-master --namespace=e2e-tests-kubectl-q876v'
Jan 2 19:08:13.173: INFO: stderr: ""
Jan 2 19:08:13.174: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 Jan 19:08:10.806 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 19:08:10.806 # Server started, Redis version 3.2.12\n1:M 02 Jan 19:08:10.807 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 19:08:10.807 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 2 19:08:13.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-q876v'
Jan 2 19:08:13.425: INFO: stderr: ""
Jan 2 19:08:13.425: INFO: stdout: "service/rm2 exposed\n"
Jan 2 19:08:13.435: INFO: Service rm2 in namespace e2e-tests-kubectl-q876v found.
STEP: exposing service
Jan 2 19:08:15.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-q876v'
Jan 2 19:08:15.734: INFO: stderr: ""
Jan 2 19:08:15.734: INFO: stdout: "service/rm3 exposed\n"
Jan 2 19:08:15.761: INFO: Service rm3 in namespace e2e-tests-kubectl-q876v found.
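The two `kubectl expose` invocations above create services `rm2` and `rm3` that reuse the source object's pod selector while mapping a new service port onto the pods' target port. A minimal sketch of that relationship, using hypothetical types rather than the real Kubernetes API objects:

```go
package main

import "fmt"

// service is a deliberately tiny stand-in for a Kubernetes Service,
// for illustration only.
type service struct {
	name       string
	selector   map[string]string
	port       int
	targetPort int
}

// expose models what `kubectl expose` does in the log above: the new service
// inherits the source's pod selector, and the requested --port is mapped onto
// the pods' --target-port.
func expose(selector map[string]string, name string, port, targetPort int) service {
	return service{name: name, selector: selector, port: port, targetPort: targetPort}
}

func main() {
	rcSelector := map[string]string{"app": "redis"}
	// kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
	rm2 := expose(rcSelector, "rm2", 1234, 6379)
	// kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
	rm3 := expose(rm2.selector, "rm3", 2345, 6379)
	fmt.Println(rm2.name, rm2.port, rm2.targetPort)
	fmt.Println(rm3.name, rm3.port, rm3.targetPort)
}
```

Both services end up selecting the same `app: redis` pods, which is what the test's subsequent service checks rely on.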
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 19:08:17.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-q876v" for this suite.
Jan 2 19:08:41.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 19:08:41.973: INFO: namespace: e2e-tests-kubectl-q876v, resource: bindings, ignored listing per whitelist
Jan 2 19:08:42.282: INFO: namespace e2e-tests-kubectl-q876v deletion completed in 24.488117096s

• [SLOW TEST:43.827 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 19:08:42.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-4aaec9a2-2d93-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 2 19:08:42.580: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-kz7wt" to be "success or failure"
Jan 2 19:08:42.637: INFO: Pod "pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.043387ms
Jan 2 19:08:44.980: INFO: Pod "pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399698515s
Jan 2 19:08:47.005: INFO: Pod "pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424450054s
Jan 2 19:08:49.685: INFO: Pod "pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.10515656s
Jan 2 19:08:51.710: INFO: Pod "pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.130098128s
Jan 2 19:08:53.732: INFO: Pod "pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.151409185s
Jan 2 19:08:55.748: INFO: Pod "pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.167845804s
STEP: Saw pod success
Jan 2 19:08:55.748: INFO: Pod "pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan 2 19:08:55.755: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 2 19:08:56.552: INFO: Waiting for pod pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005 to disappear
Jan 2 19:08:56.599: INFO: Pod pod-projected-secrets-4ab23bdc-2d93-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 19:08:56.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kz7wt" for this suite.
Jan 2 19:09:02.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 19:09:02.759: INFO: namespace: e2e-tests-projected-kz7wt, resource: bindings, ignored listing per whitelist
Jan 2 19:09:02.923: INFO: namespace e2e-tests-projected-kz7wt deletion completed in 6.309309512s

• [SLOW TEST:20.641 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 19:09:02.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 2 19:09:03.173: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-kdbnr" to be "success or failure"
Jan 2 19:09:03.196: INFO: Pod "downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.01752ms
Jan 2 19:09:05.211: INFO: Pod "downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037769322s
Jan 2 19:09:07.235: INFO: Pod "downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061238074s
Jan 2 19:09:09.884: INFO: Pod "downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.711085005s
Jan 2 19:09:11.919: INFO: Pod "downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.74550452s
Jan 2 19:09:13.934: INFO: Pod "downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.760308544s
Jan 2 19:09:15.954: INFO: Pod "downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.780688375s
STEP: Saw pod success
Jan 2 19:09:15.954: INFO: Pod "downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan 2 19:09:15.961: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005 container client-container:
STEP: delete the pod
Jan 2 19:09:16.450: INFO: Waiting for pod downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005 to disappear
Jan 2 19:09:16.496: INFO: Pod downwardapi-volume-5704b310-2d93-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 19:09:16.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kdbnr" for this suite.
Jan 2 19:09:22.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 19:09:22.706: INFO: namespace: e2e-tests-downward-api-kdbnr, resource: bindings, ignored listing per whitelist
Jan 2 19:09:22.781: INFO: namespace e2e-tests-downward-api-kdbnr deletion completed in 6.256873889s

• [SLOW TEST:19.857 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 19:09:22.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-b2zl
STEP: Creating a pod to test atomic-volume-subpath
Jan 2 19:09:23.000: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-b2zl" in namespace "e2e-tests-subpath-tj89s" to be "success or failure"
Jan 2 19:09:23.086: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Pending", Reason="", readiness=false. Elapsed: 85.620814ms
Jan 2 19:09:25.121: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12074182s
Jan 2 19:09:27.144: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144217004s
Jan 2 19:09:30.084: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Pending", Reason="", readiness=false. Elapsed: 7.084153238s
Jan 2 19:09:32.108: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Pending", Reason="", readiness=false. Elapsed: 9.107499881s
Jan 2 19:09:34.123: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Pending", Reason="", readiness=false. Elapsed: 11.122962651s
Jan 2 19:09:36.184: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Pending", Reason="", readiness=false. Elapsed: 13.183599002s
Jan 2 19:09:38.201: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Pending", Reason="", readiness=false. Elapsed: 15.200668874s
Jan 2 19:09:40.213: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Pending", Reason="", readiness=false. Elapsed: 17.213002378s
Jan 2 19:09:42.225: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Running", Reason="", readiness=false. Elapsed: 19.22503384s
Jan 2 19:09:44.237: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Running", Reason="", readiness=false. Elapsed: 21.23719791s
Jan 2 19:09:46.256: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Running", Reason="", readiness=false. Elapsed: 23.255623311s
Jan 2 19:09:48.274: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Running", Reason="", readiness=false. Elapsed: 25.273728027s
Jan 2 19:09:50.305: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Running", Reason="", readiness=false. Elapsed: 27.304688785s
Jan 2 19:09:52.319: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Running", Reason="", readiness=false. Elapsed: 29.318759562s
Jan 2 19:09:54.408: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Running", Reason="", readiness=false. Elapsed: 31.407616804s
Jan 2 19:09:56.424: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Running", Reason="", readiness=false. Elapsed: 33.423589698s
Jan 2 19:09:58.449: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Running", Reason="", readiness=false. Elapsed: 35.448745912s
Jan 2 19:10:00.585: INFO: Pod "pod-subpath-test-secret-b2zl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.585076837s
STEP: Saw pod success
Jan 2 19:10:00.585: INFO: Pod "pod-subpath-test-secret-b2zl" satisfied condition "success or failure"
Jan 2 19:10:00.594: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-b2zl container test-container-subpath-secret-b2zl:
STEP: delete the pod
Jan 2 19:10:00.917: INFO: Waiting for pod pod-subpath-test-secret-b2zl to disappear
Jan 2 19:10:00.923: INFO: Pod pod-subpath-test-secret-b2zl no longer exists
STEP: Deleting pod pod-subpath-test-secret-b2zl
Jan 2 19:10:00.923: INFO: Deleting pod "pod-subpath-test-secret-b2zl" in namespace "e2e-tests-subpath-tj89s"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 19:10:00.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-tj89s" for this suite.
Jan 2 19:10:07.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 19:10:07.046: INFO: namespace: e2e-tests-subpath-tj89s, resource: bindings, ignored listing per whitelist
Jan 2 19:10:07.154: INFO: namespace e2e-tests-subpath-tj89s deletion completed in 6.218172879s

• [SLOW TEST:44.372 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 19:10:07.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 2 19:13:11.066: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:11.124: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:13.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:13.155: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:15.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:15.139: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:17.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:17.141: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:19.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:19.146: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:21.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:21.152: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:23.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:23.144: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:25.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:25.168: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:27.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:27.154: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:29.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:29.147: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:31.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:31.149: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:33.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:33.145: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:35.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:35.145: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:37.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:37.149: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:39.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:39.140: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:41.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:41.402: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:43.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:43.186: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:45.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:45.166: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:47.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:47.140: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:49.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:49.146: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:51.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:51.145: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:53.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:53.150: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:55.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:55.148: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:57.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:57.140: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:13:59.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:13:59.146: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:01.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:01.143: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:03.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:03.147: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:05.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:05.143: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:07.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:07.142: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:09.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:09.144: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:11.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:11.149: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:13.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:13.144: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:15.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:15.143: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:17.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:17.160: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:19.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:19.690: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:21.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:21.145: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:23.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:23.145: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:25.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:25.153: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:27.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:27.168: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:29.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:29.142: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:31.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:31.144: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:33.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:33.147: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:35.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:35.145: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:37.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:37.160: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:39.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:39.144: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 2 19:14:41.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 2 19:14:41.141: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:14:43.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:14:43.170: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:14:45.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:14:45.142: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:14:47.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:14:47.149: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:14:49.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:14:49.202: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:14:51.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:14:51.163: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:14:53.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:14:53.142: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:14:55.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:14:55.151: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:14:57.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:14:57.470: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:14:59.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:14:59.683: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:15:01.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:15:01.162: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:15:03.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:15:03.144: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:15:05.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:15:05.146: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:15:07.125: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Jan 2 19:15:07.172: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:15:09.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:15:09.147: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:15:11.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:15:11.189: INFO: Pod pod-with-poststart-exec-hook still exists Jan 2 19:15:13.125: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 2 19:15:13.243: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:15:13.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zqxs4" for this suite. Jan 2 19:15:39.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:15:39.395: INFO: namespace: e2e-tests-container-lifecycle-hook-zqxs4, resource: bindings, ignored listing per whitelist Jan 2 19:15:39.510: INFO: namespace e2e-tests-container-lifecycle-hook-zqxs4 deletion completed in 26.256076306s • [SLOW TEST:332.356 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
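The poststart-hook teardown above polls every two seconds for the pod to disappear, from 19:13:11 until the "no longer exists" check at 19:15:13. As a rough sketch (assuming the framework's same-day HH:MM:SS.mmm timestamps), the total wait can be recovered from the first and last poll lines:

```python
from datetime import datetime

def poll_elapsed(first: str, last: str) -> float:
    """Seconds between two same-day e2e log timestamps like '19:13:11.066'."""
    fmt = "%H:%M:%S.%f"
    return (datetime.strptime(last, fmt) - datetime.strptime(first, fmt)).total_seconds()

# First "Waiting for pod ... to disappear" poll vs. the final check in the log:
print(poll_elapsed("19:13:11.066", "19:15:13.243"))  # ~122 seconds
```

That ~2-minute deletion wait accounts for most of the 332-second SLOW TEST total reported for the lifecycle-hook spec.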
[BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:15:39.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-582k STEP: Creating a pod to test atomic-volume-subpath Jan 2 19:15:39.914: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-582k" in namespace "e2e-tests-subpath-lpzp9" to be "success or failure" Jan 2 19:15:39.929: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Pending", Reason="", readiness=false. Elapsed: 15.377343ms Jan 2 19:15:42.149: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235299857s Jan 2 19:15:44.164: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250518095s Jan 2 19:15:46.602: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.688591964s Jan 2 19:15:48.621: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.707369823s Jan 2 19:15:51.127: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Pending", Reason="", readiness=false. Elapsed: 11.213556909s Jan 2 19:15:53.162: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Pending", Reason="", readiness=false. Elapsed: 13.248833169s Jan 2 19:15:55.187: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.27305856s Jan 2 19:15:57.215: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Pending", Reason="", readiness=false. Elapsed: 17.300973001s Jan 2 19:15:59.231: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Running", Reason="", readiness=false. Elapsed: 19.317408919s Jan 2 19:16:01.248: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Running", Reason="", readiness=false. Elapsed: 21.33457311s Jan 2 19:16:03.265: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Running", Reason="", readiness=false. Elapsed: 23.351089095s Jan 2 19:16:05.281: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Running", Reason="", readiness=false. Elapsed: 25.367394898s Jan 2 19:16:07.295: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Running", Reason="", readiness=false. Elapsed: 27.381510574s Jan 2 19:16:09.310: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Running", Reason="", readiness=false. Elapsed: 29.396565202s Jan 2 19:16:11.325: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Running", Reason="", readiness=false. Elapsed: 31.411811741s Jan 2 19:16:13.366: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Running", Reason="", readiness=false. Elapsed: 33.452620229s Jan 2 19:16:15.380: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Running", Reason="", readiness=false. Elapsed: 35.466397825s Jan 2 19:16:17.680: INFO: Pod "pod-subpath-test-downwardapi-582k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 37.765876811s STEP: Saw pod success Jan 2 19:16:17.680: INFO: Pod "pod-subpath-test-downwardapi-582k" satisfied condition "success or failure" Jan 2 19:16:17.687: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-582k container test-container-subpath-downwardapi-582k: STEP: delete the pod Jan 2 19:16:18.281: INFO: Waiting for pod pod-subpath-test-downwardapi-582k to disappear Jan 2 19:16:18.305: INFO: Pod pod-subpath-test-downwardapi-582k no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-582k Jan 2 19:16:18.306: INFO: Deleting pod "pod-subpath-test-downwardapi-582k" in namespace "e2e-tests-subpath-lpzp9" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:16:18.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lpzp9" for this suite. Jan 2 19:16:26.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:16:26.651: INFO: namespace: e2e-tests-subpath-lpzp9, resource: bindings, ignored listing per whitelist Jan 2 19:16:26.664: INFO: namespace e2e-tests-subpath-lpzp9 deletion completed in 8.323408677s • [SLOW TEST:47.154 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
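The Downward API spec titled above injects pod metadata into container environment variables through `fieldRef`. A minimal sketch of the kind of spec fragment involved, expressed as a plain dict — the env var names (`MY_POD_NAME`, etc.) are illustrative assumptions, not taken from the test source; only the container name `dapi-container` appears in this log:

```python
def downward_env(name: str, field_path: str) -> dict:
    # fieldRef is the Kubernetes mechanism for exposing pod metadata as env vars;
    # metadata.name, metadata.namespace, and status.podIP are real fieldPaths.
    return {"name": name, "valueFrom": {"fieldRef": {"fieldPath": field_path}}}

env = [
    downward_env("MY_POD_NAME", "metadata.name"),
    downward_env("MY_POD_NAMESPACE", "metadata.namespace"),
    downward_env("MY_POD_IP", "status.podIP"),
]
print([e["valueFrom"]["fieldRef"]["fieldPath"] for e in env])
```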
[BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:16:26.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 2 19:16:26.865: INFO: Waiting up to 5m0s for pod "downward-api-5f733e5b-2d94-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-ldj65" to be "success or failure" Jan 2 19:16:26.882: INFO: Pod "downward-api-5f733e5b-2d94-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.242275ms Jan 2 19:16:28.965: INFO: Pod "downward-api-5f733e5b-2d94-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099830969s Jan 2 19:16:30.984: INFO: Pod "downward-api-5f733e5b-2d94-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119602513s Jan 2 19:16:33.014: INFO: Pod "downward-api-5f733e5b-2d94-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148991999s Jan 2 19:16:35.045: INFO: Pod "downward-api-5f733e5b-2d94-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.180006585s Jan 2 19:16:37.080: INFO: Pod "downward-api-5f733e5b-2d94-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.21471098s STEP: Saw pod success Jan 2 19:16:37.080: INFO: Pod "downward-api-5f733e5b-2d94-11ea-814c-0242ac110005" satisfied condition "success or failure" Jan 2 19:16:37.085: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-5f733e5b-2d94-11ea-814c-0242ac110005 container dapi-container: STEP: delete the pod Jan 2 19:16:37.185: INFO: Waiting for pod downward-api-5f733e5b-2d94-11ea-814c-0242ac110005 to disappear Jan 2 19:16:38.283: INFO: Pod downward-api-5f733e5b-2d94-11ea-814c-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:16:38.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ldj65" for this suite. Jan 2 19:16:44.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:16:44.992: INFO: namespace: e2e-tests-downward-api-ldj65, resource: bindings, ignored listing per whitelist Jan 2 19:16:45.141: INFO: namespace e2e-tests-downward-api-ldj65 deletion completed in 6.836674154s • [SLOW TEST:18.477 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:16:45.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-cts2p A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-cts2p;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-cts2p A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-cts2p;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-cts2p.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-cts2p.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-cts2p.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-cts2p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-cts2p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-cts2p.svc;check="$$(dig +tcp +noall +answer +search 
_http._tcp.test-service-2.e2e-tests-dns-cts2p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-cts2p.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-cts2p.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 29.52.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.52.29_udp@PTR;check="$$(dig +tcp +noall +answer +search 29.52.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.52.29_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-cts2p A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-cts2p;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-cts2p A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-cts2p;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-cts2p.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-cts2p.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-cts2p.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-cts2p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-cts2p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-cts2p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-cts2p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-cts2p.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-cts2p.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 29.52.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.52.29_udp@PTR;check="$$(dig +tcp +noall +answer +search 29.52.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.52.29_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 2 19:17:03.885: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.000: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.010: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cts2p from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.022: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cts2p from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.034: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-cts2p.svc from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.045: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-cts2p.svc from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.056: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc from pod 
e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.065: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.073: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-cts2p.svc from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.082: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-cts2p.svc from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.092: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.099: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-6a92aa34-2d94-11ea-814c-0242ac110005) Jan 2 19:17:04.112: INFO: Lookups using e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-cts2p jessie_tcp@dns-test-service.e2e-tests-dns-cts2p jessie_udp@dns-test-service.e2e-tests-dns-cts2p.svc jessie_tcp@dns-test-service.e2e-tests-dns-cts2p.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc 
jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-cts2p.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-cts2p.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-cts2p.svc jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 2 19:17:09.327: INFO: DNS probes using e2e-tests-dns-cts2p/dns-test-6a92aa34-2d94-11ea-814c-0242ac110005 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:17:11.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-cts2p" for this suite. Jan 2 19:17:17.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:17:17.450: INFO: namespace: e2e-tests-dns-cts2p, resource: bindings, ignored listing per whitelist Jan 2 19:17:17.490: INFO: namespace e2e-tests-dns-cts2p deletion completed in 6.153069847s • [SLOW TEST:32.348 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:17:17.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] 
Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jan 2 19:17:17.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 2 19:17:17.897: INFO: stderr: "" Jan 2 19:17:17.898: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:17:17.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5zfbl" for this suite. 
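The api-versions check above reduces to a line-membership test on kubectl's stdout: `v1` must appear as a whole line, not merely as a suffix of `apps/v1`. A sketch against an abridged copy of the output captured in the log:

```python
# Abridged from the `kubectl api-versions` stdout logged above.
stdout = "apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nstorage.k8s.io/v1\nv1\n"

def has_core_v1(out: str) -> bool:
    # Substring matching would false-positive on "apps/v1"; compare whole lines.
    return "v1" in out.splitlines()

print(has_core_v1(stdout))  # True
```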
Jan 2 19:17:25.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:17:26.304: INFO: namespace: e2e-tests-kubectl-5zfbl, resource: bindings, ignored listing per whitelist Jan 2 19:17:26.331: INFO: namespace e2e-tests-kubectl-5zfbl deletion completed in 8.422004145s • [SLOW TEST:8.840 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:17:26.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0102 19:17:57.404195 8 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 2 19:17:57.404: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:17:57.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wrh9j" for this suite. 
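The orphaning behaviour exercised above is driven by `deleteOptions.propagationPolicy`, a real Kubernetes API field. The function below is an illustration of the decision being tested, not the controller's actual code:

```python
def keeps_dependents(propagation_policy: str) -> bool:
    # "Orphan" detaches dependents, so the ReplicaSet survives the Deployment's
    # deletion; "Background" and "Foreground" cascade the delete instead.
    return propagation_policy == "Orphan"

print(keeps_dependents("Orphan"), keeps_dependents("Background"))
```

The 30-second wait in the test exists to catch a garbage collector that mistakenly cascades despite the Orphan policy.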
Jan 2 19:18:07.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:18:08.654: INFO: namespace: e2e-tests-gc-wrh9j, resource: bindings, ignored listing per whitelist Jan 2 19:18:08.702: INFO: namespace e2e-tests-gc-wrh9j deletion completed in 11.285041022s • [SLOW TEST:42.371 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:18:08.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:18:20.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-dbrlg" for this suite. 
Jan 2 19:19:16.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:19:16.577: INFO: namespace: e2e-tests-kubelet-test-dbrlg, resource: bindings, ignored listing per whitelist Jan 2 19:19:16.613: INFO: namespace e2e-tests-kubelet-test-dbrlg deletion completed in 56.370475867s • [SLOW TEST:67.910 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:19:16.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 2 19:19:17.137: INFO: Number of nodes with available pods: 0 Jan 2 19:19:17.137: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:19.930: INFO: Number of nodes with available pods: 0 Jan 2 19:19:19.930: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:20.362: INFO: Number of nodes with available pods: 0 Jan 2 19:19:20.362: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:21.156: INFO: Number of nodes with available pods: 0 Jan 2 19:19:21.156: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:22.199: INFO: Number of nodes with available pods: 0 Jan 2 19:19:22.199: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:23.427: INFO: Number of nodes with available pods: 0 Jan 2 19:19:23.427: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:24.196: INFO: Number of nodes with available pods: 0 Jan 2 19:19:24.196: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:25.159: INFO: Number of nodes with available pods: 0 Jan 2 19:19:25.159: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:26.169: INFO: Number of nodes with available pods: 1 Jan 2 19:19:26.169: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 2 19:19:26.231: INFO: Number of nodes with available pods: 0 Jan 2 19:19:26.231: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:27.310: INFO: Number of nodes with available pods: 0 Jan 2 19:19:27.310: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:28.248: INFO: Number of nodes with available pods: 0 Jan 2 19:19:28.248: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:29.579: INFO: Number of nodes with available pods: 0 Jan 2 19:19:29.580: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:30.283: INFO: Number of nodes with available pods: 0 Jan 2 19:19:30.283: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:31.255: INFO: Number of nodes with available pods: 0 Jan 2 19:19:31.255: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:32.256: INFO: Number of nodes with available pods: 0 Jan 2 19:19:32.256: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:33.249: INFO: Number of nodes with available pods: 0 Jan 2 19:19:33.249: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:34.248: INFO: Number of nodes with available pods: 0 Jan 2 19:19:34.248: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:35.260: INFO: Number of nodes with available pods: 0 Jan 2 19:19:35.260: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:36.261: INFO: Number of nodes with available pods: 0 Jan 2 19:19:36.261: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:37.335: INFO: Number of nodes with available pods: 0 Jan 2 19:19:37.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:38.258: INFO: Number of nodes with available pods: 0 Jan 2 19:19:38.258: INFO: 
Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:39.264: INFO: Number of nodes with available pods: 0 Jan 2 19:19:39.264: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:40.264: INFO: Number of nodes with available pods: 0 Jan 2 19:19:40.264: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:41.259: INFO: Number of nodes with available pods: 0 Jan 2 19:19:41.260: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:42.264: INFO: Number of nodes with available pods: 0 Jan 2 19:19:42.264: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:43.249: INFO: Number of nodes with available pods: 0 Jan 2 19:19:43.249: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:44.439: INFO: Number of nodes with available pods: 0 Jan 2 19:19:44.439: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:45.263: INFO: Number of nodes with available pods: 0 Jan 2 19:19:45.263: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:46.282: INFO: Number of nodes with available pods: 0 Jan 2 19:19:46.282: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:47.246: INFO: Number of nodes with available pods: 0 Jan 2 19:19:47.246: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:48.927: INFO: Number of nodes with available pods: 0 Jan 2 19:19:48.927: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:49.776: INFO: Number of nodes with available pods: 0 Jan 2 19:19:49.776: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:50.264: INFO: Number of nodes with available pods: 0 Jan 2 19:19:50.264: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:51.267: 
INFO: Number of nodes with available pods: 0 Jan 2 19:19:51.267: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 2 19:19:52.290: INFO: Number of nodes with available pods: 1 Jan 2 19:19:52.290: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6zfw6, will wait for the garbage collector to delete the pods Jan 2 19:19:52.407: INFO: Deleting DaemonSet.extensions daemon-set took: 14.187784ms Jan 2 19:19:52.507: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.509172ms Jan 2 19:20:02.767: INFO: Number of nodes with available pods: 0 Jan 2 19:20:02.767: INFO: Number of running nodes: 0, number of available pods: 0 Jan 2 19:20:02.777: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6zfw6/daemonsets","resourceVersion":"16951192"},"items":null} Jan 2 19:20:02.781: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6zfw6/pods","resourceVersion":"16951192"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:20:02.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-6zfw6" for this suite. 
Jan 2 19:20:08.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:20:08.956: INFO: namespace: e2e-tests-daemonsets-6zfw6, resource: bindings, ignored listing per whitelist Jan 2 19:20:09.014: INFO: namespace e2e-tests-daemonsets-6zfw6 deletion completed in 6.205536524s • [SLOW TEST:52.400 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:20:09.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 2 19:20:09.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-77vwp' Jan 2 19:20:12.103: INFO: stderr: "" Jan 2 19:20:12.103: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jan 2 19:20:13.899: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:13.899: INFO: Found 0 / 1 Jan 2 19:20:14.287: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:14.287: INFO: Found 0 / 1 Jan 2 19:20:15.214: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:15.214: INFO: Found 0 / 1 Jan 2 19:20:16.130: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:16.130: INFO: Found 0 / 1 Jan 2 19:20:17.121: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:17.121: INFO: Found 0 / 1 Jan 2 19:20:18.368: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:18.368: INFO: Found 0 / 1 Jan 2 19:20:19.912: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:19.912: INFO: Found 0 / 1 Jan 2 19:20:20.145: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:20.145: INFO: Found 0 / 1 Jan 2 19:20:21.113: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:21.114: INFO: Found 0 / 1 Jan 2 19:20:22.121: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:22.121: INFO: Found 1 / 1 Jan 2 19:20:22.121: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 2 19:20:22.141: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:22.141: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 2 19:20:22.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-vqdrv --namespace=e2e-tests-kubectl-77vwp -p {"metadata":{"annotations":{"x":"y"}}}' Jan 2 19:20:22.376: INFO: stderr: "" Jan 2 19:20:22.377: INFO: stdout: "pod/redis-master-vqdrv patched\n" STEP: checking annotations Jan 2 19:20:22.446: INFO: Selector matched 1 pods for map[app:redis] Jan 2 19:20:22.446: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:20:22.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-77vwp" for this suite. Jan 2 19:20:44.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:20:44.654: INFO: namespace: e2e-tests-kubectl-77vwp, resource: bindings, ignored listing per whitelist Jan 2 19:20:44.689: INFO: namespace e2e-tests-kubectl-77vwp deletion completed in 22.201114867s • [SLOW TEST:35.675 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:20:44.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 2 19:20:44.985: INFO: Waiting up to 5m0s for pod "downward-api-f95464a9-2d94-11ea-814c-0242ac110005" 
in namespace "e2e-tests-downward-api-tkdv6" to be "success or failure" Jan 2 19:20:45.003: INFO: Pod "downward-api-f95464a9-2d94-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.966039ms Jan 2 19:20:47.492: INFO: Pod "downward-api-f95464a9-2d94-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506241884s Jan 2 19:20:49.517: INFO: Pod "downward-api-f95464a9-2d94-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.531782149s Jan 2 19:20:51.591: INFO: Pod "downward-api-f95464a9-2d94-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.605716112s Jan 2 19:20:53.626: INFO: Pod "downward-api-f95464a9-2d94-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.640227741s Jan 2 19:20:55.652: INFO: Pod "downward-api-f95464a9-2d94-11ea-814c-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.666932364s Jan 2 19:20:57.670: INFO: Pod "downward-api-f95464a9-2d94-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.684575513s STEP: Saw pod success Jan 2 19:20:57.670: INFO: Pod "downward-api-f95464a9-2d94-11ea-814c-0242ac110005" satisfied condition "success or failure" Jan 2 19:20:57.676: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f95464a9-2d94-11ea-814c-0242ac110005 container dapi-container: STEP: delete the pod Jan 2 19:20:58.059: INFO: Waiting for pod downward-api-f95464a9-2d94-11ea-814c-0242ac110005 to disappear Jan 2 19:20:58.086: INFO: Pod downward-api-f95464a9-2d94-11ea-814c-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:20:58.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tkdv6" for this suite. 
Jan 2 19:21:06.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:21:06.355: INFO: namespace: e2e-tests-downward-api-tkdv6, resource: bindings, ignored listing per whitelist Jan 2 19:21:06.367: INFO: namespace e2e-tests-downward-api-tkdv6 deletion completed in 8.271329518s • [SLOW TEST:21.678 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:21:06.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 2 19:21:06.649: INFO: Waiting up to 5m0s for pod "pod-063eac1e-2d95-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-l5n5z" to be "success or failure" Jan 2 19:21:06.673: INFO: Pod "pod-063eac1e-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.86276ms Jan 2 19:21:08.706: INFO: Pod "pod-063eac1e-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.056922441s Jan 2 19:21:10.721: INFO: Pod "pod-063eac1e-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07174003s Jan 2 19:21:12.811: INFO: Pod "pod-063eac1e-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162040118s Jan 2 19:21:14.857: INFO: Pod "pod-063eac1e-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207345737s Jan 2 19:21:16.888: INFO: Pod "pod-063eac1e-2d95-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.238582516s STEP: Saw pod success Jan 2 19:21:16.888: INFO: Pod "pod-063eac1e-2d95-11ea-814c-0242ac110005" satisfied condition "success or failure" Jan 2 19:21:16.901: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-063eac1e-2d95-11ea-814c-0242ac110005 container test-container: STEP: delete the pod Jan 2 19:21:17.105: INFO: Waiting for pod pod-063eac1e-2d95-11ea-814c-0242ac110005 to disappear Jan 2 19:21:17.113: INFO: Pod pod-063eac1e-2d95-11ea-814c-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 2 19:21:17.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-l5n5z" for this suite. 
Jan 2 19:21:23.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 19:21:23.265: INFO: namespace: e2e-tests-emptydir-l5n5z, resource: bindings, ignored listing per whitelist Jan 2 19:21:23.340: INFO: namespace e2e-tests-emptydir-l5n5z deletion completed in 6.220439228s • [SLOW TEST:16.972 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 19:21:23.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 2 19:21:23.675: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.254442ms)
Jan  2 19:21:23.686: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.257959ms)
Jan  2 19:21:23.694: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.670547ms)
Jan  2 19:21:23.756: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 62.170865ms)
Jan  2 19:21:23.766: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.32282ms)
Jan  2 19:21:23.775: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.168873ms)
Jan  2 19:21:23.794: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 18.914147ms)
Jan  2 19:21:23.807: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.99594ms)
Jan  2 19:21:23.856: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 48.381545ms)
Jan  2 19:21:23.876: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.675517ms)
Jan  2 19:21:23.892: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.785845ms)
Jan  2 19:21:23.905: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.24639ms)
Jan  2 19:21:23.919: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.337769ms)
Jan  2 19:21:23.927: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.939538ms)
Jan  2 19:21:23.934: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.233426ms)
Jan  2 19:21:23.943: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.765367ms)
Jan  2 19:21:23.953: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.871422ms)
Jan  2 19:21:23.962: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.63183ms)
Jan  2 19:21:23.973: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.328163ms)
Jan  2 19:21:23.992: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 18.124796ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:21:23.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-79bcw" for this suite.
Jan  2 19:21:30.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:21:30.300: INFO: namespace: e2e-tests-proxy-79bcw, resource: bindings, ignored listing per whitelist
Jan  2 19:21:30.322: INFO: namespace e2e-tests-proxy-79bcw deletion completed in 6.309818934s

• [SLOW TEST:6.981 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:21:30.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-148191d5-2d95-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 19:21:30.640: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-9sjww" to be "success or failure"
Jan  2 19:21:30.660: INFO: Pod "pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.022531ms
Jan  2 19:21:32.782: INFO: Pod "pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14253366s
Jan  2 19:21:34.804: INFO: Pod "pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163931909s
Jan  2 19:21:37.443: INFO: Pod "pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.803067208s
Jan  2 19:21:39.942: INFO: Pod "pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.301796058s
Jan  2 19:21:41.958: INFO: Pod "pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.317705282s
STEP: Saw pod success
Jan  2 19:21:41.958: INFO: Pod "pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:21:41.964: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 19:21:42.603: INFO: Waiting for pod pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005 to disappear
Jan  2 19:21:42.736: INFO: Pod pod-projected-secrets-1484826d-2d95-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:21:42.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9sjww" for this suite.
Jan  2 19:21:48.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:21:48.984: INFO: namespace: e2e-tests-projected-9sjww, resource: bindings, ignored listing per whitelist
Jan  2 19:21:49.056: INFO: namespace e2e-tests-projected-9sjww deletion completed in 6.290801926s

• [SLOW TEST:18.734 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
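The projected-secret test that just passed mounts a secret through a `projected` volume with an item mapping and an explicit per-file mode. A hedged manifest sketch of that shape, assuming hypothetical names, keys, and mode (none are taken from the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected/new-path-data"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map  # hypothetical secret name
          items:
          - key: data-1                    # hypothetical key
            path: new-path-data            # the "mapping" under test
            mode: 0400                     # the "Item Mode" under test
```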
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:21:49.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  2 19:21:49.315: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9wwvz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9wwvz/configmaps/e2e-watch-test-resource-version,UID:1fa71724-2d95-11ea-a994-fa163e34d433,ResourceVersion:16951453,Generation:0,CreationTimestamp:2020-01-02 19:21:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 19:21:49.315: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9wwvz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9wwvz/configmaps/e2e-watch-test-resource-version,UID:1fa71724-2d95-11ea-a994-fa163e34d433,ResourceVersion:16951454,Generation:0,CreationTimestamp:2020-01-02 19:21:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:21:49.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9wwvz" for this suite.
Jan  2 19:21:55.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:21:55.674: INFO: namespace: e2e-tests-watch-9wwvz, resource: bindings, ignored listing per whitelist
Jan  2 19:21:55.714: INFO: namespace e2e-tests-watch-9wwvz deletion completed in 6.387013167s

• [SLOW TEST:6.657 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:21:55.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  2 19:21:56.003: INFO: Waiting up to 5m0s for pod "pod-23a5c164-2d95-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-nl7fj" to be "success or failure"
Jan  2 19:21:56.011: INFO: Pod "pod-23a5c164-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.845877ms
Jan  2 19:21:58.029: INFO: Pod "pod-23a5c164-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026153669s
Jan  2 19:22:00.058: INFO: Pod "pod-23a5c164-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054835885s
Jan  2 19:22:02.621: INFO: Pod "pod-23a5c164-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.618169978s
Jan  2 19:22:04.668: INFO: Pod "pod-23a5c164-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.665292775s
Jan  2 19:22:06.706: INFO: Pod "pod-23a5c164-2d95-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.702879441s
STEP: Saw pod success
Jan  2 19:22:06.706: INFO: Pod "pod-23a5c164-2d95-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:22:06.720: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-23a5c164-2d95-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 19:22:07.436: INFO: Waiting for pod pod-23a5c164-2d95-11ea-814c-0242ac110005 to disappear
Jan  2 19:22:07.445: INFO: Pod pod-23a5c164-2d95-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:22:07.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nl7fj" for this suite.
Jan  2 19:22:13.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:22:13.886: INFO: namespace: e2e-tests-emptydir-nl7fj, resource: bindings, ignored listing per whitelist
Jan  2 19:22:13.926: INFO: namespace e2e-tests-emptydir-nl7fj deletion completed in 6.469939695s

• [SLOW TEST:18.212 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:22:13.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-2e97ef97-2d95-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 19:22:14.440: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005" in namespace "e2e-tests-configmap-fr2v6" to be "success or failure"
Jan  2 19:22:14.517: INFO: Pod "pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 76.715139ms
Jan  2 19:22:16.868: INFO: Pod "pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428184934s
Jan  2 19:22:18.902: INFO: Pod "pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.462150838s
Jan  2 19:22:21.156: INFO: Pod "pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.716402182s
Jan  2 19:22:23.173: INFO: Pod "pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.733284979s
Jan  2 19:22:25.203: INFO: Pod "pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.763226827s
STEP: Saw pod success
Jan  2 19:22:25.203: INFO: Pod "pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:22:25.216: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 19:22:25.340: INFO: Waiting for pod pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005 to disappear
Jan  2 19:22:25.361: INFO: Pod pod-configmaps-2ea4ac27-2d95-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:22:25.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fr2v6" for this suite.
Jan  2 19:22:33.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:22:33.693: INFO: namespace: e2e-tests-configmap-fr2v6, resource: bindings, ignored listing per whitelist
Jan  2 19:22:33.795: INFO: namespace e2e-tests-configmap-fr2v6 deletion completed in 8.418380685s

• [SLOW TEST:19.867 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:22:33.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 19:22:34.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-9v2wj'
Jan  2 19:22:34.473: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 19:22:34.473: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan  2 19:22:38.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9v2wj'
Jan  2 19:22:38.981: INFO: stderr: ""
Jan  2 19:22:38.981: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:22:38.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9v2wj" for this suite.
Jan  2 19:22:45.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:22:45.043: INFO: namespace: e2e-tests-kubectl-9v2wj, resource: bindings, ignored listing per whitelist
Jan  2 19:22:45.238: INFO: namespace e2e-tests-kubectl-9v2wj deletion completed in 6.247085814s

• [SLOW TEST:11.443 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:22:45.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  2 19:22:45.516: INFO: Waiting up to 5m0s for pod "pod-412cd63d-2d95-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-q5bvr" to be "success or failure"
Jan  2 19:22:45.523: INFO: Pod "pod-412cd63d-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436155ms
Jan  2 19:22:47.669: INFO: Pod "pod-412cd63d-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152364068s
Jan  2 19:22:49.685: INFO: Pod "pod-412cd63d-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168489905s
Jan  2 19:22:51.990: INFO: Pod "pod-412cd63d-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.473292121s
Jan  2 19:22:54.012: INFO: Pod "pod-412cd63d-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495723567s
Jan  2 19:22:56.021: INFO: Pod "pod-412cd63d-2d95-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.505002717s
STEP: Saw pod success
Jan  2 19:22:56.021: INFO: Pod "pod-412cd63d-2d95-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:22:56.025: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-412cd63d-2d95-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 19:22:56.998: INFO: Waiting for pod pod-412cd63d-2d95-11ea-814c-0242ac110005 to disappear
Jan  2 19:22:57.227: INFO: Pod pod-412cd63d-2d95-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:22:57.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q5bvr" for this suite.
Jan  2 19:23:03.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:23:03.472: INFO: namespace: e2e-tests-emptydir-q5bvr, resource: bindings, ignored listing per whitelist
Jan  2 19:23:03.655: INFO: namespace e2e-tests-emptydir-q5bvr deletion completed in 6.410233866s

• [SLOW TEST:18.415 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:23:03.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ptjms
Jan  2 19:23:16.029: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ptjms
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 19:23:16.045: INFO: Initial restart count of pod liveness-http is 0
Jan  2 19:23:36.839: INFO: Restart count of pod e2e-tests-container-probe-ptjms/liveness-http is now 1 (20.793301711s elapsed)
Jan  2 19:23:57.183: INFO: Restart count of pod e2e-tests-container-probe-ptjms/liveness-http is now 2 (41.138182913s elapsed)
Jan  2 19:24:17.613: INFO: Restart count of pod e2e-tests-container-probe-ptjms/liveness-http is now 3 (1m1.567682153s elapsed)
Jan  2 19:24:35.934: INFO: Restart count of pod e2e-tests-container-probe-ptjms/liveness-http is now 4 (1m19.888985534s elapsed)
Jan  2 19:25:38.019: INFO: Restart count of pod e2e-tests-container-probe-ptjms/liveness-http is now 5 (2m21.97405284s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:25:38.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ptjms" for this suite.
Jan  2 19:25:46.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:25:46.554: INFO: namespace: e2e-tests-container-probe-ptjms, resource: bindings, ignored listing per whitelist
Jan  2 19:25:46.564: INFO: namespace e2e-tests-container-probe-ptjms deletion completed in 8.393417414s

• [SLOW TEST:162.908 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:25:46.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  2 19:25:46.777: INFO: Waiting up to 5m0s for pod "pod-ad361772-2d95-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-5kxvq" to be "success or failure"
Jan  2 19:25:46.798: INFO: Pod "pod-ad361772-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.404221ms
Jan  2 19:25:48.876: INFO: Pod "pod-ad361772-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099609909s
Jan  2 19:25:50.899: INFO: Pod "pod-ad361772-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122662576s
Jan  2 19:25:53.141: INFO: Pod "pod-ad361772-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.364487193s
Jan  2 19:25:55.157: INFO: Pod "pod-ad361772-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.380601838s
Jan  2 19:25:57.184: INFO: Pod "pod-ad361772-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.407266639s
Jan  2 19:25:59.467: INFO: Pod "pod-ad361772-2d95-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.690662465s
STEP: Saw pod success
Jan  2 19:25:59.468: INFO: Pod "pod-ad361772-2d95-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:25:59.482: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ad361772-2d95-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 19:25:59.635: INFO: Waiting for pod pod-ad361772-2d95-11ea-814c-0242ac110005 to disappear
Jan  2 19:25:59.682: INFO: Pod pod-ad361772-2d95-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:25:59.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5kxvq" for this suite.
Jan  2 19:26:05.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:26:05.995: INFO: namespace: e2e-tests-emptydir-5kxvq, resource: bindings, ignored listing per whitelist
Jan  2 19:26:06.019: INFO: namespace e2e-tests-emptydir-5kxvq deletion completed in 6.320123837s

• [SLOW TEST:19.454 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:26:06.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:26:06.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-lp6tq" for this suite.
Jan  2 19:26:30.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:26:30.904: INFO: namespace: e2e-tests-pods-lp6tq, resource: bindings, ignored listing per whitelist
Jan  2 19:26:30.942: INFO: namespace e2e-tests-pods-lp6tq deletion completed in 24.576908034s

• [SLOW TEST:24.923 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:26:30.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c7b43be6-2d95-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 19:26:31.244: INFO: Waiting up to 5m0s for pod "pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005" in namespace "e2e-tests-secrets-jd9tk" to be "success or failure"
Jan  2 19:26:31.352: INFO: Pod "pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 107.631555ms
Jan  2 19:26:33.535: INFO: Pod "pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290851234s
Jan  2 19:26:35.554: INFO: Pod "pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309964922s
Jan  2 19:26:37.576: INFO: Pod "pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.331184708s
Jan  2 19:26:40.491: INFO: Pod "pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.246923259s
Jan  2 19:26:42.516: INFO: Pod "pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.271094168s
STEP: Saw pod success
Jan  2 19:26:42.516: INFO: Pod "pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:26:42.524: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan  2 19:26:42.900: INFO: Waiting for pod pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005 to disappear
Jan  2 19:26:42.927: INFO: Pod pod-secrets-c7b57ef4-2d95-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:26:42.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jd9tk" for this suite.
Jan  2 19:26:48.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:26:49.158: INFO: namespace: e2e-tests-secrets-jd9tk, resource: bindings, ignored listing per whitelist
Jan  2 19:26:49.254: INFO: namespace e2e-tests-secrets-jd9tk deletion completed in 6.32094689s

• [SLOW TEST:18.311 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:26:49.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 19:27:00.168: INFO: Successfully updated pod "annotationupdated29f0056-2d95-11ea-814c-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:27:02.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-w6vld" for this suite.
Jan  2 19:27:26.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:27:26.359: INFO: namespace: e2e-tests-downward-api-w6vld, resource: bindings, ignored listing per whitelist
Jan  2 19:27:26.838: INFO: namespace e2e-tests-downward-api-w6vld deletion completed in 24.558880817s

• [SLOW TEST:37.583 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:27:26.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  2 19:27:27.065: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-86z4s,SelfLink:/api/v1/namespaces/e2e-tests-watch-86z4s/configmaps/e2e-watch-test-watch-closed,UID:e8fb7234-2d95-11ea-a994-fa163e34d433,ResourceVersion:16952110,Generation:0,CreationTimestamp:2020-01-02 19:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 19:27:27.065: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-86z4s,SelfLink:/api/v1/namespaces/e2e-tests-watch-86z4s/configmaps/e2e-watch-test-watch-closed,UID:e8fb7234-2d95-11ea-a994-fa163e34d433,ResourceVersion:16952111,Generation:0,CreationTimestamp:2020-01-02 19:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  2 19:27:27.121: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-86z4s,SelfLink:/api/v1/namespaces/e2e-tests-watch-86z4s/configmaps/e2e-watch-test-watch-closed,UID:e8fb7234-2d95-11ea-a994-fa163e34d433,ResourceVersion:16952112,Generation:0,CreationTimestamp:2020-01-02 19:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 19:27:27.121: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-86z4s,SelfLink:/api/v1/namespaces/e2e-tests-watch-86z4s/configmaps/e2e-watch-test-watch-closed,UID:e8fb7234-2d95-11ea-a994-fa163e34d433,ResourceVersion:16952113,Generation:0,CreationTimestamp:2020-01-02 19:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:27:27.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-86z4s" for this suite.
Jan  2 19:27:33.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:27:33.251: INFO: namespace: e2e-tests-watch-86z4s, resource: bindings, ignored listing per whitelist
Jan  2 19:27:33.300: INFO: namespace e2e-tests-watch-86z4s deletion completed in 6.167273781s

• [SLOW TEST:6.461 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
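The watch-restart sequence logged above (two notifications, watch closed, new watch started from the last observed resourceVersion, remaining events delivered) can be sketched without a cluster. This is a minimal, illustrative Python simulation of the resume semantics — not client-go code — using the resourceVersions and event types from the log:

```python
# Events as recorded in the test log: (type, resourceVersion, data).
EVENTS = [
    ("ADDED",    16952110, {}),
    ("MODIFIED", 16952111, {"mutation": "1"}),
    ("MODIFIED", 16952112, {"mutation": "2"}),
    ("DELETED",  16952113, {"mutation": "2"}),
]

def watch(events, since_rv=0, limit=None):
    """Return events with resourceVersion strictly greater than since_rv,
    stopping after `limit` notifications (simulating a closed watch)."""
    out = []
    for etype, rv, data in events:
        if rv > since_rv:
            out.append((etype, rv, data))
            if limit is not None and len(out) == limit:
                break
    return out

# First watch: close after receiving two notifications (ADDED, MODIFIED).
first = watch(EVENTS, limit=2)
last_rv = first[-1][1]

# Second watch: resume from the last resourceVersion the first watch saw;
# only the changes made while the watch was closed are delivered.
resumed = watch(EVENTS, since_rv=last_rv)
print([e[0] for e in resumed])  # ['MODIFIED', 'DELETED']
```

The strict `rv > since_rv` comparison is what lets the restarted watch skip everything it already observed while still catching the second mutation and the deletion.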
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:27:33.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan  2 19:27:33.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tj9h6'
Jan  2 19:27:34.258: INFO: stderr: ""
Jan  2 19:27:34.258: INFO: stdout: "pod/pause created\n"
Jan  2 19:27:34.258: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  2 19:27:34.258: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-tj9h6" to be "running and ready"
Jan  2 19:27:34.325: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 66.169541ms
Jan  2 19:27:36.644: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.385309054s
Jan  2 19:27:38.664: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.405484682s
Jan  2 19:27:41.010: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.751395368s
Jan  2 19:27:43.024: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.765338674s
Jan  2 19:27:45.036: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.777358792s
Jan  2 19:27:45.036: INFO: Pod "pause" satisfied condition "running and ready"
Jan  2 19:27:45.036: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  2 19:27:45.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-tj9h6'
Jan  2 19:27:45.302: INFO: stderr: ""
Jan  2 19:27:45.303: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  2 19:27:45.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tj9h6'
Jan  2 19:27:45.474: INFO: stderr: ""
Jan  2 19:27:45.474: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  2 19:27:45.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-tj9h6'
Jan  2 19:27:45.590: INFO: stderr: ""
Jan  2 19:27:45.590: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  2 19:27:45.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tj9h6'
Jan  2 19:27:45.696: INFO: stderr: ""
Jan  2 19:27:45.696: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan  2 19:27:45.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tj9h6'
Jan  2 19:27:45.970: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 19:27:45.970: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  2 19:27:45.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-tj9h6'
Jan  2 19:27:46.298: INFO: stderr: "No resources found.\n"
Jan  2 19:27:46.298: INFO: stdout: ""
Jan  2 19:27:46.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-tj9h6 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 19:27:46.420: INFO: stderr: ""
Jan  2 19:27:46.420: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:27:46.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tj9h6" for this suite.
Jan  2 19:27:54.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:27:54.209: INFO: namespace: e2e-tests-kubectl-tj9h6, resource: bindings, ignored listing per whitelist
Jan  2 19:27:54.235: INFO: namespace e2e-tests-kubectl-tj9h6 deletion completed in 7.798368357s

• [SLOW TEST:20.935 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
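The label test above drives two argument forms of `kubectl label`: `testing-label=testing-label-value` to set a label and `testing-label-` (trailing dash) to remove it. A rough, cluster-free sketch of that key=value / key- convention — `apply_label_arg` is a hypothetical helper, not kubectl's implementation:

```python
def apply_label_arg(labels, arg):
    """Apply one kubectl-style label argument to a label map:
    'key=value' sets the label; 'key-' (trailing dash) removes it."""
    labels = dict(labels)  # work on a copy
    if arg.endswith("-"):
        labels.pop(arg[:-1], None)
    else:
        key, _, value = arg.partition("=")
        labels[key] = value
    return labels

labels = {}
labels = apply_label_arg(labels, "testing-label=testing-label-value")
print(labels)  # {'testing-label': 'testing-label-value'}
labels = apply_label_arg(labels, "testing-label-")
print(labels)  # {}
```

This mirrors what the test verifies via `kubectl get pod pause -L testing-label`: the TESTING-LABEL column shows the value after the first command and is empty after the second.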
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:27:54.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan  2 19:27:54.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-f2qlt run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  2 19:28:04.277: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0102 19:28:02.266805     417 log.go:172] (0xc0001386e0) (0xc0008ba140) Create stream\nI0102 19:28:02.267105     417 log.go:172] (0xc0001386e0) (0xc0008ba140) Stream added, broadcasting: 1\nI0102 19:28:02.274112     417 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0102 19:28:02.274188     417 log.go:172] (0xc0001386e0) (0xc0008f41e0) Create stream\nI0102 19:28:02.274199     417 log.go:172] (0xc0001386e0) (0xc0008f41e0) Stream added, broadcasting: 3\nI0102 19:28:02.275663     417 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0102 19:28:02.275702     417 log.go:172] (0xc0001386e0) (0xc0008f4280) Create stream\nI0102 19:28:02.275712     417 log.go:172] (0xc0001386e0) (0xc0008f4280) Stream added, broadcasting: 5\nI0102 19:28:02.277340     417 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0102 19:28:02.277373     417 log.go:172] (0xc0001386e0) (0xc0008f4320) Create stream\nI0102 19:28:02.277381     417 log.go:172] (0xc0001386e0) (0xc0008f4320) Stream added, broadcasting: 7\nI0102 19:28:02.278951     417 log.go:172] (0xc0001386e0) Reply frame received for 7\nI0102 19:28:02.279337     417 log.go:172] (0xc0008f41e0) (3) Writing data frame\nI0102 19:28:02.279586     417 log.go:172] (0xc0008f41e0) (3) Writing data frame\nI0102 19:28:02.285558     417 log.go:172] (0xc0001386e0) Data frame received for 5\nI0102 19:28:02.285569     417 log.go:172] (0xc0008f4280) (5) Data frame handling\nI0102 19:28:02.285578     417 log.go:172] (0xc0008f4280) (5) Data frame sent\nI0102 19:28:02.289782     417 log.go:172] (0xc0001386e0) Data frame received for 5\nI0102 19:28:02.289800     417 log.go:172] (0xc0008f4280) (5) Data frame handling\nI0102 19:28:02.289813     417 log.go:172] (0xc0008f4280) (5) Data frame sent\nI0102 19:28:04.196109     417 log.go:172] (0xc0001386e0) (0xc0008f41e0) Stream removed, broadcasting: 3\nI0102 19:28:04.196433     417 log.go:172] (0xc0001386e0) Data frame received for 1\nI0102 19:28:04.196472     417 log.go:172] (0xc0008ba140) (1) Data frame handling\nI0102 19:28:04.196495     417 log.go:172] (0xc0008ba140) (1) Data frame sent\nI0102 19:28:04.196529     417 log.go:172] (0xc0001386e0) (0xc0008ba140) Stream removed, broadcasting: 1\nI0102 19:28:04.196589     417 log.go:172] (0xc0001386e0) (0xc0008f4280) Stream removed, broadcasting: 5\nI0102 19:28:04.196963     417 log.go:172] (0xc0001386e0) (0xc0008f4320) Stream removed, broadcasting: 7\nI0102 19:28:04.197005     417 log.go:172] (0xc0001386e0) Go away received\nI0102 19:28:04.197171     417 log.go:172] (0xc0001386e0) (0xc0008ba140) Stream removed, broadcasting: 1\nI0102 19:28:04.197204     417 log.go:172] (0xc0001386e0) (0xc0008f41e0) Stream removed, broadcasting: 3\nI0102 19:28:04.197218     417 log.go:172] (0xc0001386e0) (0xc0008f4280) Stream removed, broadcasting: 5\nI0102 19:28:04.197234     417 log.go:172] (0xc0001386e0) (0xc0008f4320) Stream removed, broadcasting: 7\n"
Jan  2 19:28:04.278: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:28:06.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f2qlt" for this suite.
Jan  2 19:28:12.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:28:12.439: INFO: namespace: e2e-tests-kubectl-f2qlt, resource: bindings, ignored listing per whitelist
Jan  2 19:28:12.570: INFO: namespace e2e-tests-kubectl-f2qlt deletion completed in 6.265015381s

• [SLOW TEST:18.335 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
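The `--rm` job above attaches stdin to a container running `sh -c "cat && echo 'stdin closed'"`; the test feeds it `abcd1234`, and the log shows stdout `abcd1234stdin closed`. The shell behavior itself can be reproduced locally without a cluster by driving the same command through a subprocess:

```python
import subprocess

# Same command the job container ran, invoked directly instead of via
# `kubectl run --attach`; stdin carries "abcd1234" and is then closed,
# so `cat` echoes it back and the trailing `echo` runs afterwards.
result = subprocess.run(
    ["sh", "-c", "cat && echo 'stdin closed'"],
    input=b"abcd1234",
    capture_output=True,
)
print(result.stdout.decode())  # abcd1234stdin closed
```

Closing stdin is what lets `cat` terminate, which is why the attached session ends and kubectl can proceed to delete the job.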
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:28:12.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-04389ed5-2d96-11ea-814c-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-04389eb1-2d96-11ea-814c-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  2 19:28:12.871: INFO: Waiting up to 5m0s for pod "projected-volume-04389e72-2d96-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-48gqr" to be "success or failure"
Jan  2 19:28:12.885: INFO: Pod "projected-volume-04389e72-2d96-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.548575ms
Jan  2 19:28:15.340: INFO: Pod "projected-volume-04389e72-2d96-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.469061473s
Jan  2 19:28:17.405: INFO: Pod "projected-volume-04389e72-2d96-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.534057822s
Jan  2 19:28:20.059: INFO: Pod "projected-volume-04389e72-2d96-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.187836065s
Jan  2 19:28:22.084: INFO: Pod "projected-volume-04389e72-2d96-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.212813074s
Jan  2 19:28:24.100: INFO: Pod "projected-volume-04389e72-2d96-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.228744217s
STEP: Saw pod success
Jan  2 19:28:24.100: INFO: Pod "projected-volume-04389e72-2d96-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:28:24.105: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-04389e72-2d96-11ea-814c-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan  2 19:28:25.098: INFO: Waiting for pod projected-volume-04389e72-2d96-11ea-814c-0242ac110005 to disappear
Jan  2 19:28:25.393: INFO: Pod projected-volume-04389e72-2d96-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:28:25.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-48gqr" for this suite.
Jan  2 19:28:31.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:28:31.586: INFO: namespace: e2e-tests-projected-48gqr, resource: bindings, ignored listing per whitelist
Jan  2 19:28:31.711: INFO: namespace e2e-tests-projected-48gqr deletion completed in 6.298220985s

• [SLOW TEST:19.140 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
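The projected-volume test above mounts a configMap, a secret, and downward API fields as files under one mount point. A rough, cluster-free sketch of that merge — file names and contents here are made up for illustration; the real projection is performed by the kubelet's volume plugin:

```python
import os
import tempfile

# Hypothetical stand-ins for the configMap, secret, and downward API
# entries the test projects into a single volume.
sources = {
    "configmap-data": "value-1",
    "secret-data": "value-2",
    "podname": "projected-volume-test",
}

# "Project" all sources into one directory, as the volume plugin would
# present them under a single mount point.
mount = tempfile.mkdtemp(prefix="projected-")
for name, content in sources.items():
    with open(os.path.join(mount, name), "w") as f:
        f.write(content)

print(sorted(os.listdir(mount)))
```

The point the conformance test checks is exactly this: entries from all three source types coexist under the same directory and are readable by the pod's container.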
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:28:31.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-0fa9da7e-2d96-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 19:28:32.003: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-d2zvq" to be "success or failure"
Jan  2 19:28:32.012: INFO: Pod "pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.84786ms
Jan  2 19:28:34.071: INFO: Pod "pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067434662s
Jan  2 19:28:36.087: INFO: Pod "pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083709357s
Jan  2 19:28:38.117: INFO: Pod "pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113410348s
Jan  2 19:28:40.136: INFO: Pod "pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132838858s
Jan  2 19:28:42.150: INFO: Pod "pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.146708884s
Jan  2 19:28:44.165: INFO: Pod "pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.161817906s
STEP: Saw pod success
Jan  2 19:28:44.165: INFO: Pod "pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:28:44.171: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 19:28:45.544: INFO: Waiting for pod pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005 to disappear
Jan  2 19:28:45.557: INFO: Pod pod-projected-configmaps-0fb09dd6-2d96-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:28:45.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d2zvq" for this suite.
Jan  2 19:28:51.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:28:51.811: INFO: namespace: e2e-tests-projected-d2zvq, resource: bindings, ignored listing per whitelist
Jan  2 19:28:51.863: INFO: namespace e2e-tests-projected-d2zvq deletion completed in 6.300011612s

• [SLOW TEST:20.151 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:28:51.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  2 19:28:52.790: INFO: Pod name wrapped-volume-race-1c11afda-2d96-11ea-814c-0242ac110005: Found 0 pods out of 5
Jan  2 19:28:57.821: INFO: Pod name wrapped-volume-race-1c11afda-2d96-11ea-814c-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1c11afda-2d96-11ea-814c-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vtkzd, will wait for the garbage collector to delete the pods
Jan  2 19:31:32.086: INFO: Deleting ReplicationController wrapped-volume-race-1c11afda-2d96-11ea-814c-0242ac110005 took: 23.263761ms
Jan  2 19:31:32.386: INFO: Terminating ReplicationController wrapped-volume-race-1c11afda-2d96-11ea-814c-0242ac110005 pods took: 300.635017ms
STEP: Creating RC which spawns configmap-volume pods
Jan  2 19:32:22.988: INFO: Pod name wrapped-volume-race-994329a1-2d96-11ea-814c-0242ac110005: Found 0 pods out of 5
Jan  2 19:32:28.014: INFO: Pod name wrapped-volume-race-994329a1-2d96-11ea-814c-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-994329a1-2d96-11ea-814c-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vtkzd, will wait for the garbage collector to delete the pods
Jan  2 19:35:02.357: INFO: Deleting ReplicationController wrapped-volume-race-994329a1-2d96-11ea-814c-0242ac110005 took: 105.528172ms
Jan  2 19:35:02.958: INFO: Terminating ReplicationController wrapped-volume-race-994329a1-2d96-11ea-814c-0242ac110005 pods took: 601.002579ms
STEP: Creating RC which spawns configmap-volume pods
Jan  2 19:35:52.953: INFO: Pod name wrapped-volume-race-166a5182-2d97-11ea-814c-0242ac110005: Found 0 pods out of 5
Jan  2 19:35:57.980: INFO: Pod name wrapped-volume-race-166a5182-2d97-11ea-814c-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-166a5182-2d97-11ea-814c-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vtkzd, will wait for the garbage collector to delete the pods
Jan  2 19:37:40.172: INFO: Deleting ReplicationController wrapped-volume-race-166a5182-2d97-11ea-814c-0242ac110005 took: 40.331538ms
Jan  2 19:37:40.473: INFO: Terminating ReplicationController wrapped-volume-race-166a5182-2d97-11ea-814c-0242ac110005 pods took: 300.928512ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:38:26.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vtkzd" for this suite.
Jan  2 19:38:34.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:38:34.959: INFO: namespace: e2e-tests-emptydir-wrapper-vtkzd, resource: bindings, ignored listing per whitelist
Jan  2 19:38:34.978: INFO: namespace e2e-tests-emptydir-wrapper-vtkzd deletion completed in 8.208082051s

• [SLOW TEST:583.115 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:38:34.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  2 19:38:35.182: INFO: Waiting up to 5m0s for pod "pod-7737833f-2d97-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-795sd" to be "success or failure"
Jan  2 19:38:35.271: INFO: Pod "pod-7737833f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 88.856425ms
Jan  2 19:38:37.560: INFO: Pod "pod-7737833f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377331418s
Jan  2 19:38:40.850: INFO: Pod "pod-7737833f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.66799422s
Jan  2 19:38:42.873: INFO: Pod "pod-7737833f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.691005396s
Jan  2 19:38:44.889: INFO: Pod "pod-7737833f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.706853617s
Jan  2 19:38:46.917: INFO: Pod "pod-7737833f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.734739412s
Jan  2 19:38:48.940: INFO: Pod "pod-7737833f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.75805314s
Jan  2 19:38:50.971: INFO: Pod "pod-7737833f-2d97-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.788280735s
STEP: Saw pod success
Jan  2 19:38:50.971: INFO: Pod "pod-7737833f-2d97-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:38:50.982: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7737833f-2d97-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 19:38:51.362: INFO: Waiting for pod pod-7737833f-2d97-11ea-814c-0242ac110005 to disappear
Jan  2 19:38:51.380: INFO: Pod pod-7737833f-2d97-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:38:51.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-795sd" for this suite.
Jan  2 19:38:57.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:38:57.636: INFO: namespace: e2e-tests-emptydir-795sd, resource: bindings, ignored listing per whitelist
Jan  2 19:38:57.687: INFO: namespace e2e-tests-emptydir-795sd deletion completed in 6.290284864s

• [SLOW TEST:22.708 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
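The (root,0644,default) test writes a file into an emptyDir volume on the node's default medium and has the container confirm the 0644 mode. The permission check itself can be sketched locally — the directory here is just a stand-in for the emptyDir mount, not the kubelet's volume path:

```python
import os
import stat
import tempfile

# Stand-in for an emptyDir volume on the default (disk-backed) medium.
volume = tempfile.mkdtemp(prefix="emptydir-")
path = os.path.join(volume, "test-file")

with open(path, "w") as f:
    f.write("mount-tester content")
os.chmod(path, 0o644)  # the mode the conformance test asserts

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o644
```

The tmpfs variants of this test (such as the (root,0777,tmpfs) case at the top of this run) differ only in the backing medium; the mode assertion is the same.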
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:38:57.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 19:38:57.900: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-bp6kk" to be "success or failure"
Jan  2 19:38:58.115: INFO: Pod "downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 214.640846ms
Jan  2 19:39:00.132: INFO: Pod "downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231488377s
Jan  2 19:39:02.160: INFO: Pod "downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259630861s
Jan  2 19:39:04.530: INFO: Pod "downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.629427075s
Jan  2 19:39:06.789: INFO: Pod "downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.888928206s
Jan  2 19:39:08.813: INFO: Pod "downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.912234361s
STEP: Saw pod success
Jan  2 19:39:08.813: INFO: Pod "downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:39:08.831: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 19:39:09.715: INFO: Waiting for pod downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005 to disappear
Jan  2 19:39:09.749: INFO: Pod downwardapi-volume-84bf393f-2d97-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:39:09.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bp6kk" for this suite.
Jan  2 19:39:16.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:39:16.407: INFO: namespace: e2e-tests-downward-api-bp6kk, resource: bindings, ignored listing per whitelist
Jan  2 19:39:16.667: INFO: namespace e2e-tests-downward-api-bp6kk deletion completed in 6.905155501s

• [SLOW TEST:18.981 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:39:16.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  2 19:39:17.152: INFO: Waiting up to 5m0s for pod "pod-903c4da3-2d97-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-k7flb" to be "success or failure"
Jan  2 19:39:17.163: INFO: Pod "pod-903c4da3-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.204723ms
Jan  2 19:39:19.182: INFO: Pod "pod-903c4da3-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029969555s
Jan  2 19:39:21.201: INFO: Pod "pod-903c4da3-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049644253s
Jan  2 19:39:23.226: INFO: Pod "pod-903c4da3-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074336778s
Jan  2 19:39:25.245: INFO: Pod "pod-903c4da3-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092745316s
Jan  2 19:39:27.274: INFO: Pod "pod-903c4da3-2d97-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121900767s
STEP: Saw pod success
Jan  2 19:39:27.274: INFO: Pod "pod-903c4da3-2d97-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:39:27.288: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-903c4da3-2d97-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 19:39:27.459: INFO: Waiting for pod pod-903c4da3-2d97-11ea-814c-0242ac110005 to disappear
Jan  2 19:39:27.466: INFO: Pod pod-903c4da3-2d97-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:39:27.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k7flb" for this suite.
Jan  2 19:39:33.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:39:33.950: INFO: namespace: e2e-tests-emptydir-k7flb, resource: bindings, ignored listing per whitelist
Jan  2 19:39:34.038: INFO: namespace e2e-tests-emptydir-k7flb deletion completed in 6.565878919s

• [SLOW TEST:17.369 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:39:34.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-j8bhd
Jan  2 19:39:44.393: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-j8bhd
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 19:39:44.398: INFO: Initial restart count of pod liveness-exec is 0
Jan  2 19:40:39.356: INFO: Restart count of pod e2e-tests-container-probe-j8bhd/liveness-exec is now 1 (54.957936581s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:40:39.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j8bhd" for this suite.
Jan  2 19:40:47.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:40:47.866: INFO: namespace: e2e-tests-container-probe-j8bhd, resource: bindings, ignored listing per whitelist
Jan  2 19:40:47.962: INFO: namespace e2e-tests-container-probe-j8bhd deletion completed in 8.198046049s

• [SLOW TEST:73.923 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:40:47.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 19:40:48.193: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:40:49.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-mm4qw" for this suite.
Jan  2 19:40:55.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:40:55.643: INFO: namespace: e2e-tests-custom-resource-definition-mm4qw, resource: bindings, ignored listing per whitelist
Jan  2 19:40:55.762: INFO: namespace e2e-tests-custom-resource-definition-mm4qw deletion completed in 6.224855894s

• [SLOW TEST:7.798 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:40:55.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  2 19:41:16.317: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 19:41:16.426: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 19:41:18.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 19:41:18.909: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 19:41:20.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 19:41:20.442: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 19:41:22.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 19:41:22.452: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 19:41:24.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 19:41:24.437: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 19:41:26.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 19:41:26.455: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 19:41:28.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 19:41:28.471: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 19:41:30.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 19:41:30.452: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 19:41:32.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 19:41:32.437: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 19:41:34.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 19:41:34.439: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:41:34.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-p6w2c" for this suite.
Jan  2 19:41:58.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:41:58.671: INFO: namespace: e2e-tests-container-lifecycle-hook-p6w2c, resource: bindings, ignored listing per whitelist
Jan  2 19:41:59.033: INFO: namespace e2e-tests-container-lifecycle-hook-p6w2c deletion completed in 24.58423446s

• [SLOW TEST:63.270 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:41:59.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 19:41:59.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-sw9pf" to be "success or failure"
Jan  2 19:41:59.244: INFO: Pod "downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.854751ms
Jan  2 19:42:01.266: INFO: Pod "downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033379099s
Jan  2 19:42:03.280: INFO: Pod "downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047375804s
Jan  2 19:42:05.697: INFO: Pod "downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464107059s
Jan  2 19:42:08.204: INFO: Pod "downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.971394837s
Jan  2 19:42:10.224: INFO: Pod "downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.991459365s
STEP: Saw pod success
Jan  2 19:42:10.224: INFO: Pod "downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:42:10.231: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 19:42:10.681: INFO: Waiting for pod downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005 to disappear
Jan  2 19:42:10.695: INFO: Pod downwardapi-volume-f0d7cbf9-2d97-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:42:10.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sw9pf" for this suite.
Jan  2 19:42:16.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:42:17.123: INFO: namespace: e2e-tests-projected-sw9pf, resource: bindings, ignored listing per whitelist
Jan  2 19:42:17.183: INFO: namespace e2e-tests-projected-sw9pf deletion completed in 6.477811956s

• [SLOW TEST:18.150 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:42:17.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  2 19:42:17.480: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:42:35.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7dwhf" for this suite.
Jan  2 19:42:43.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:42:43.861: INFO: namespace: e2e-tests-init-container-7dwhf, resource: bindings, ignored listing per whitelist
Jan  2 19:42:43.946: INFO: namespace e2e-tests-init-container-7dwhf deletion completed in 8.297686734s

• [SLOW TEST:26.762 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:42:43.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-0bb29689-2d98-11ea-814c-0242ac110005
STEP: Creating secret with name s-test-opt-upd-0bb296f7-2d98-11ea-814c-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0bb29689-2d98-11ea-814c-0242ac110005
STEP: Updating secret s-test-opt-upd-0bb296f7-2d98-11ea-814c-0242ac110005
STEP: Creating secret with name s-test-opt-create-0bb2974d-2d98-11ea-814c-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:44:14.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6nkvk" for this suite.
Jan  2 19:44:38.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:44:38.206: INFO: namespace: e2e-tests-secrets-6nkvk, resource: bindings, ignored listing per whitelist
Jan  2 19:44:38.282: INFO: namespace e2e-tests-secrets-6nkvk deletion completed in 24.210696036s

• [SLOW TEST:114.336 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:44:38.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 19:44:38.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:44:53.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4q7b7" for this suite.
Jan  2 19:45:47.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:45:47.530: INFO: namespace: e2e-tests-pods-4q7b7, resource: bindings, ignored listing per whitelist
Jan  2 19:45:47.561: INFO: namespace e2e-tests-pods-4q7b7 deletion completed in 54.223726153s

• [SLOW TEST:69.278 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:45:47.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-79103508-2d98-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 19:45:47.795: INFO: Waiting up to 5m0s for pod "pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005" in namespace "e2e-tests-configmap-ww2px" to be "success or failure"
Jan  2 19:45:47.810: INFO: Pod "pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.933525ms
Jan  2 19:45:49.834: INFO: Pod "pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03859773s
Jan  2 19:45:51.859: INFO: Pod "pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064194633s
Jan  2 19:45:53.901: INFO: Pod "pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105573862s
Jan  2 19:45:55.927: INFO: Pod "pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132209007s
Jan  2 19:45:58.048: INFO: Pod "pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.252655528s
STEP: Saw pod success
Jan  2 19:45:58.048: INFO: Pod "pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:45:58.056: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 19:45:58.858: INFO: Waiting for pod pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005 to disappear
Jan  2 19:45:58.885: INFO: Pod pod-configmaps-7910e392-2d98-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:45:58.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ww2px" for this suite.
Jan  2 19:46:04.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:46:05.012: INFO: namespace: e2e-tests-configmap-ww2px, resource: bindings, ignored listing per whitelist
Jan  2 19:46:05.124: INFO: namespace e2e-tests-configmap-ww2px deletion completed in 6.231947078s

• [SLOW TEST:17.563 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:46:05.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan  2 19:46:05.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  2 19:46:07.242: INFO: stderr: ""
Jan  2 19:46:07.242: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:46:07.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vvfc7" for this suite.
Jan  2 19:46:13.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:46:13.519: INFO: namespace: e2e-tests-kubectl-vvfc7, resource: bindings, ignored listing per whitelist
Jan  2 19:46:13.531: INFO: namespace e2e-tests-kubectl-vvfc7 deletion completed in 6.271322314s

• [SLOW TEST:8.406 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:46:13.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan  2 19:46:13.969: INFO: Waiting up to 5m0s for pod "client-containers-88980381-2d98-11ea-814c-0242ac110005" in namespace "e2e-tests-containers-27qbs" to be "success or failure"
Jan  2 19:46:14.015: INFO: Pod "client-containers-88980381-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.143928ms
Jan  2 19:46:16.165: INFO: Pod "client-containers-88980381-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195942136s
Jan  2 19:46:18.185: INFO: Pod "client-containers-88980381-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215834513s
Jan  2 19:46:20.225: INFO: Pod "client-containers-88980381-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256173078s
Jan  2 19:46:22.244: INFO: Pod "client-containers-88980381-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.274588581s
Jan  2 19:46:24.256: INFO: Pod "client-containers-88980381-2d98-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.286827357s
STEP: Saw pod success
Jan  2 19:46:24.256: INFO: Pod "client-containers-88980381-2d98-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:46:24.260: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-88980381-2d98-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 19:46:24.572: INFO: Waiting for pod client-containers-88980381-2d98-11ea-814c-0242ac110005 to disappear
Jan  2 19:46:24.585: INFO: Pod client-containers-88980381-2d98-11ea-814c-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:46:24.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-27qbs" for this suite.
Jan  2 19:46:32.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:46:32.151: INFO: namespace: e2e-tests-containers-27qbs, resource: bindings, ignored listing per whitelist
Jan  2 19:46:32.287: INFO: namespace e2e-tests-containers-27qbs deletion completed in 6.634376889s

• [SLOW TEST:18.756 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:46:32.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-93e6eb43-2d98-11ea-814c-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-93e6eb9a-2d98-11ea-814c-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-93e6eb43-2d98-11ea-814c-0242ac110005
STEP: Updating configmap cm-test-opt-upd-93e6eb9a-2d98-11ea-814c-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-93e6ebc5-2d98-11ea-814c-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:47:54.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pctsz" for this suite.
Jan  2 19:48:19.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:48:19.075: INFO: namespace: e2e-tests-projected-pctsz, resource: bindings, ignored listing per whitelist
Jan  2 19:48:19.180: INFO: namespace e2e-tests-projected-pctsz deletion completed in 24.202610445s

• [SLOW TEST:106.893 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:48:19.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 19:48:19.519: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  2 19:48:24.537: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  2 19:48:30.562: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  2 19:48:32.592: INFO: Creating deployment "test-rollover-deployment"
Jan  2 19:48:32.654: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  2 19:48:34.998: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  2 19:48:35.507: INFO: Ensure that both replica sets have 1 created replica
Jan  2 19:48:35.526: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  2 19:48:35.549: INFO: Updating deployment test-rollover-deployment
Jan  2 19:48:35.549: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  2 19:48:38.122: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  2 19:48:38.144: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  2 19:48:38.333: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 19:48:38.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 19:48:40.397: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 19:48:40.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 19:48:42.696: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 19:48:42.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 19:48:44.378: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 19:48:44.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 19:48:46.467: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 19:48:46.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 19:48:48.418: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 19:48:48.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591326, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 19:48:50.368: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 19:48:50.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591326, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 19:48:52.361: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 19:48:52.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591326, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 19:48:54.355: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 19:48:54.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591326, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 19:48:56.353: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 19:48:56.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591326, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713591312, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 19:48:58.673: INFO: 
Jan  2 19:48:58.673: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 19:48:58.698: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-wp2j9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wp2j9/deployments/test-rollover-deployment,UID:db50570b-2d98-11ea-a994-fa163e34d433,ResourceVersion:16954620,Generation:2,CreationTimestamp:2020-01-02 19:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-02 19:48:32 +0000 UTC 2020-01-02 19:48:32 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-02 19:48:57 +0000 UTC 2020-01-02 19:48:32 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  2 19:48:58.705: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-wp2j9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wp2j9/replicasets/test-rollover-deployment-5b8479fdb6,UID:dd148ac1-2d98-11ea-a994-fa163e34d433,ResourceVersion:16954610,Generation:2,CreationTimestamp:2020-01-02 19:48:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment db50570b-2d98-11ea-a994-fa163e34d433 0xc0021b5037 0xc0021b5038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  2 19:48:58.705: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  2 19:48:58.705: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-wp2j9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wp2j9/replicasets/test-rollover-controller,UID:d3759334-2d98-11ea-a994-fa163e34d433,ResourceVersion:16954619,Generation:2,CreationTimestamp:2020-01-02 19:48:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment db50570b-2d98-11ea-a994-fa163e34d433 0xc0021b4e37 0xc0021b4e38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 19:48:58.706: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-wp2j9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wp2j9/replicasets/test-rollover-deployment-58494b7559,UID:db5cb93e-2d98-11ea-a994-fa163e34d433,ResourceVersion:16954577,Generation:2,CreationTimestamp:2020-01-02 19:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment db50570b-2d98-11ea-a994-fa163e34d433 0xc0021b4ef7 0xc0021b4ef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 19:48:58.716: INFO: Pod "test-rollover-deployment-5b8479fdb6-h554k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-h554k,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-wp2j9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wp2j9/pods/test-rollover-deployment-5b8479fdb6-h554k,UID:dd9540ad-2d98-11ea-a994-fa163e34d433,ResourceVersion:16954595,Generation:0,CreationTimestamp:2020-01-02 19:48:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 dd148ac1-2d98-11ea-a994-fa163e34d433 0xc0021e1227 0xc0021e1228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-blk4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-blk4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-blk4f true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021e1310} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021e1330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 19:48:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 19:48:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 19:48:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 19:48:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-02 19:48:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-02 19:48:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://386c9242558d78102ac9304ce6bee6a428b9e67bfee7e917bc9cbac373a47235}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:48:58.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-wp2j9" for this suite.
Jan  2 19:49:08.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:49:08.897: INFO: namespace: e2e-tests-deployment-wp2j9, resource: bindings, ignored listing per whitelist
Jan  2 19:49:09.060: INFO: namespace e2e-tests-deployment-wp2j9 deletion completed in 10.329145101s

• [SLOW TEST:49.878 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:49:09.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-f1255f82-2d98-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 19:49:09.310: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-sj4br" to be "success or failure"
Jan  2 19:49:09.379: INFO: Pod "pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 69.314523ms
Jan  2 19:49:11.435: INFO: Pod "pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125503331s
Jan  2 19:49:13.461: INFO: Pod "pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150957606s
Jan  2 19:49:15.510: INFO: Pod "pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200277569s
Jan  2 19:49:17.528: INFO: Pod "pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.218303605s
Jan  2 19:49:19.650: INFO: Pod "pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.340197434s
STEP: Saw pod success
Jan  2 19:49:19.650: INFO: Pod "pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:49:19.658: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 19:49:19.932: INFO: Waiting for pod pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005 to disappear
Jan  2 19:49:19.950: INFO: Pod pod-projected-secrets-f1265562-2d98-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:49:19.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sj4br" for this suite.
Jan  2 19:49:26.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:49:26.355: INFO: namespace: e2e-tests-projected-sj4br, resource: bindings, ignored listing per whitelist
Jan  2 19:49:26.381: INFO: namespace e2e-tests-projected-sj4br deletion completed in 6.30126933s

• [SLOW TEST:17.322 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:49:26.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 19:49:26.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:49:37.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8qmk8" for this suite.
Jan  2 19:50:25.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:50:25.160: INFO: namespace: e2e-tests-pods-8qmk8, resource: bindings, ignored listing per whitelist
Jan  2 19:50:25.279: INFO: namespace e2e-tests-pods-8qmk8 deletion completed in 48.236150297s

• [SLOW TEST:58.898 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:50:25.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-flwtk
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  2 19:50:25.774: INFO: Found 0 stateful pods, waiting for 3
Jan  2 19:50:35.798: INFO: Found 1 stateful pods, waiting for 3
Jan  2 19:50:45.829: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 19:50:45.829: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 19:50:45.829: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 19:50:55.793: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 19:50:55.793: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 19:50:55.793: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jan  2 19:51:05.803: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 19:51:05.803: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 19:51:05.803: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  2 19:51:05.888: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  2 19:51:16.024: INFO: Updating stateful set ss2
Jan  2 19:51:16.050: INFO: Waiting for Pod e2e-tests-statefulset-flwtk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  2 19:51:26.793: INFO: Found 2 stateful pods, waiting for 3
Jan  2 19:51:36.828: INFO: Found 2 stateful pods, waiting for 3
Jan  2 19:51:47.088: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 19:51:47.088: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 19:51:47.088: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 19:51:56.817: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 19:51:56.817: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 19:51:56.817: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  2 19:51:56.906: INFO: Updating stateful set ss2
Jan  2 19:51:57.004: INFO: Waiting for Pod e2e-tests-statefulset-flwtk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 19:52:07.057: INFO: Updating stateful set ss2
Jan  2 19:52:07.184: INFO: Waiting for StatefulSet e2e-tests-statefulset-flwtk/ss2 to complete update
Jan  2 19:52:07.184: INFO: Waiting for Pod e2e-tests-statefulset-flwtk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 19:52:17.283: INFO: Waiting for StatefulSet e2e-tests-statefulset-flwtk/ss2 to complete update
Jan  2 19:52:17.283: INFO: Waiting for Pod e2e-tests-statefulset-flwtk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 19:52:27.215: INFO: Waiting for StatefulSet e2e-tests-statefulset-flwtk/ss2 to complete update
Jan  2 19:52:27.215: INFO: Waiting for Pod e2e-tests-statefulset-flwtk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 19:52:37.241: INFO: Waiting for StatefulSet e2e-tests-statefulset-flwtk/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 19:52:47.214: INFO: Deleting all statefulset in ns e2e-tests-statefulset-flwtk
Jan  2 19:52:47.220: INFO: Scaling statefulset ss2 to 0
Jan  2 19:53:17.279: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 19:53:17.289: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:53:17.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-flwtk" for this suite.
Jan  2 19:53:25.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:53:25.611: INFO: namespace: e2e-tests-statefulset-flwtk, resource: bindings, ignored listing per whitelist
Jan  2 19:53:25.671: INFO: namespace e2e-tests-statefulset-flwtk deletion completed in 8.257226286s

• [SLOW TEST:180.392 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:53:25.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-8a4aafdb-2d99-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 19:53:26.198: INFO: Waiting up to 5m0s for pod "pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005" in namespace "e2e-tests-secrets-t8snp" to be "success or failure"
Jan  2 19:53:26.540: INFO: Pod "pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 342.268349ms
Jan  2 19:53:28.652: INFO: Pod "pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453889934s
Jan  2 19:53:30.671: INFO: Pod "pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472924925s
Jan  2 19:53:32.687: INFO: Pod "pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.488383612s
Jan  2 19:53:34.708: INFO: Pod "pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51021346s
Jan  2 19:53:36.740: INFO: Pod "pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.542042257s
Jan  2 19:53:38.760: INFO: Pod "pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.561934472s
STEP: Saw pod success
Jan  2 19:53:38.761: INFO: Pod "pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 19:53:38.766: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 19:53:38.949: INFO: Waiting for pod pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005 to disappear
Jan  2 19:53:38.957: INFO: Pod pod-secrets-8a4cb8e0-2d99-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:53:38.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-t8snp" for this suite.
Jan  2 19:53:45.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:53:45.116: INFO: namespace: e2e-tests-secrets-t8snp, resource: bindings, ignored listing per whitelist
Jan  2 19:53:45.159: INFO: namespace e2e-tests-secrets-t8snp deletion completed in 6.191686275s

• [SLOW TEST:19.486 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:53:45.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  2 19:53:57.947: INFO: Successfully updated pod "pod-update-95ba50e4-2d99-11ea-814c-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan  2 19:53:57.990: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:53:57.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-kt7d6" for this suite.
Jan  2 19:54:22.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:54:22.245: INFO: namespace: e2e-tests-pods-kt7d6, resource: bindings, ignored listing per whitelist
Jan  2 19:54:22.349: INFO: namespace e2e-tests-pods-kt7d6 deletion completed in 24.344877165s

• [SLOW TEST:37.190 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:54:22.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 19:54:22.916: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  2 19:54:22.949: INFO: Number of nodes with available pods: 0
Jan  2 19:54:22.949: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:24.486: INFO: Number of nodes with available pods: 0
Jan  2 19:54:24.486: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:25.007: INFO: Number of nodes with available pods: 0
Jan  2 19:54:25.007: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:25.989: INFO: Number of nodes with available pods: 0
Jan  2 19:54:25.989: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:26.973: INFO: Number of nodes with available pods: 0
Jan  2 19:54:26.974: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:27.973: INFO: Number of nodes with available pods: 0
Jan  2 19:54:27.973: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:29.458: INFO: Number of nodes with available pods: 0
Jan  2 19:54:29.458: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:30.656: INFO: Number of nodes with available pods: 0
Jan  2 19:54:30.656: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:30.972: INFO: Number of nodes with available pods: 0
Jan  2 19:54:30.972: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:32.099: INFO: Number of nodes with available pods: 0
Jan  2 19:54:32.099: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:32.974: INFO: Number of nodes with available pods: 1
Jan  2 19:54:32.974: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  2 19:54:33.110: INFO: Wrong image for pod: daemon-set-tvntm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 19:54:34.178: INFO: Wrong image for pod: daemon-set-tvntm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 19:54:35.153: INFO: Wrong image for pod: daemon-set-tvntm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 19:54:36.374: INFO: Wrong image for pod: daemon-set-tvntm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 19:54:37.147: INFO: Wrong image for pod: daemon-set-tvntm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 19:54:38.420: INFO: Wrong image for pod: daemon-set-tvntm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 19:54:39.146: INFO: Wrong image for pod: daemon-set-tvntm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 19:54:40.149: INFO: Wrong image for pod: daemon-set-tvntm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 19:54:41.145: INFO: Wrong image for pod: daemon-set-tvntm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 19:54:41.145: INFO: Pod daemon-set-tvntm is not available
Jan  2 19:54:42.953: INFO: Pod daemon-set-62n4z is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  2 19:54:43.335: INFO: Number of nodes with available pods: 0
Jan  2 19:54:43.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:44.355: INFO: Number of nodes with available pods: 0
Jan  2 19:54:44.355: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:45.380: INFO: Number of nodes with available pods: 0
Jan  2 19:54:45.380: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:46.353: INFO: Number of nodes with available pods: 0
Jan  2 19:54:46.353: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:48.367: INFO: Number of nodes with available pods: 0
Jan  2 19:54:48.367: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:49.357: INFO: Number of nodes with available pods: 0
Jan  2 19:54:49.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:50.420: INFO: Number of nodes with available pods: 0
Jan  2 19:54:50.420: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:51.377: INFO: Number of nodes with available pods: 0
Jan  2 19:54:51.377: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 19:54:52.395: INFO: Number of nodes with available pods: 1
Jan  2 19:54:52.395: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-c2k8w, will wait for the garbage collector to delete the pods
Jan  2 19:54:52.606: INFO: Deleting DaemonSet.extensions daemon-set took: 61.073636ms
Jan  2 19:54:52.707: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.513886ms
Jan  2 19:55:00.712: INFO: Number of nodes with available pods: 0
Jan  2 19:55:00.712: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 19:55:00.716: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-c2k8w/daemonsets","resourceVersion":"16955483"},"items":null}

Jan  2 19:55:00.719: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-c2k8w/pods","resourceVersion":"16955483"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:55:00.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-c2k8w" for this suite.
Jan  2 19:55:06.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:55:06.905: INFO: namespace: e2e-tests-daemonsets-c2k8w, resource: bindings, ignored listing per whitelist
Jan  2 19:55:07.247: INFO: namespace e2e-tests-daemonsets-c2k8w deletion completed in 6.512746761s

• [SLOW TEST:44.897 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:55:07.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jdjv8
Jan  2 19:55:19.693: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jdjv8
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 19:55:19.699: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:59:20.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jdjv8" for this suite.
Jan  2 19:59:26.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 19:59:26.203: INFO: namespace: e2e-tests-container-probe-jdjv8, resource: bindings, ignored listing per whitelist
Jan  2 19:59:26.311: INFO: namespace e2e-tests-container-probe-jdjv8 deletion completed in 6.238139741s

• [SLOW TEST:259.064 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 19:59:26.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  2 19:59:39.707: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 19:59:40.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-lk2fw" for this suite.
Jan  2 20:00:09.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:00:09.802: INFO: namespace: e2e-tests-replicaset-lk2fw, resource: bindings, ignored listing per whitelist
Jan  2 20:00:09.827: INFO: namespace e2e-tests-replicaset-lk2fw deletion completed in 29.033425115s

• [SLOW TEST:43.515 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:00:09.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7b069578-2d9a-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 20:00:10.273: INFO: Waiting up to 5m0s for pod "pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005" in namespace "e2e-tests-secrets-644jl" to be "success or failure"
Jan  2 20:00:10.287: INFO: Pod "pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.315407ms
Jan  2 20:00:12.322: INFO: Pod "pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048571479s
Jan  2 20:00:14.356: INFO: Pod "pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082864235s
Jan  2 20:00:16.862: INFO: Pod "pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588918903s
Jan  2 20:00:18.889: INFO: Pod "pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.61593423s
Jan  2 20:00:20.906: INFO: Pod "pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.633205902s
STEP: Saw pod success
Jan  2 20:00:20.906: INFO: Pod "pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:00:20.915: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 20:00:22.116: INFO: Waiting for pod pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005 to disappear
Jan  2 20:00:22.128: INFO: Pod pod-secrets-7b24cd8c-2d9a-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:00:22.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-644jl" for this suite.
Jan  2 20:00:28.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:00:28.333: INFO: namespace: e2e-tests-secrets-644jl, resource: bindings, ignored listing per whitelist
Jan  2 20:00:28.491: INFO: namespace e2e-tests-secrets-644jl deletion completed in 6.328781278s
STEP: Destroying namespace "e2e-tests-secret-namespace-9chgl" for this suite.
Jan  2 20:00:34.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:00:34.661: INFO: namespace: e2e-tests-secret-namespace-9chgl, resource: bindings, ignored listing per whitelist
Jan  2 20:00:34.695: INFO: namespace e2e-tests-secret-namespace-9chgl deletion completed in 6.204444086s

• [SLOW TEST:24.868 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:00:34.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 20:00:34.857: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  2 20:00:34.884: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  2 20:00:39.957: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  2 20:00:46.001: INFO: Creating deployment "test-rolling-update-deployment"
Jan  2 20:00:46.033: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  2 20:00:46.074: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  2 20:00:48.550: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  2 20:00:48.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 20:00:50.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 20:00:52.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 20:00:54.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713592046, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 20:00:56.604: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 20:00:56.630: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-9d845,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9d845/deployments/test-rolling-update-deployment,UID:9076721a-2d9a-11ea-a994-fa163e34d433,ResourceVersion:16956063,Generation:1,CreationTimestamp:2020-01-02 20:00:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-02 20:00:46 +0000 UTC 2020-01-02 20:00:46 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-02 20:00:56 +0000 UTC 2020-01-02 20:00:46 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  2 20:00:56.635: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-9d845,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9d845/replicasets/test-rolling-update-deployment-75db98fb4c,UID:909bddcc-2d9a-11ea-a994-fa163e34d433,ResourceVersion:16956054,Generation:1,CreationTimestamp:2020-01-02 20:00:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9076721a-2d9a-11ea-a994-fa163e34d433 0xc001158cc7 0xc001158cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  2 20:00:56.635: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  2 20:00:56.636: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-9d845,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9d845/replicasets/test-rolling-update-controller,UID:89d0cfea-2d9a-11ea-a994-fa163e34d433,ResourceVersion:16956062,Generation:2,CreationTimestamp:2020-01-02 20:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9076721a-2d9a-11ea-a994-fa163e34d433 0xc001158bf7 0xc001158bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 20:00:56.646: INFO: Pod "test-rolling-update-deployment-75db98fb4c-r5br7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-r5br7,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-9d845,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9d845/pods/test-rolling-update-deployment-75db98fb4c-r5br7,UID:909d9d7c-2d9a-11ea-a994-fa163e34d433,ResourceVersion:16956053,Generation:0,CreationTimestamp:2020-01-02 20:00:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 909bddcc-2d9a-11ea-a994-fa163e34d433 0xc00230e167 0xc00230e168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mq94m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mq94m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-mq94m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00230e1d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00230e1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:00:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:00:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:00:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:00:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-02 20:00:46 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-02 20:00:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d2680911d1251504b717ccdfd99ad89adec0047a5a2491584719120f6e8353e6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:00:56.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9d845" for this suite.
Jan  2 20:01:04.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:01:04.800: INFO: namespace: e2e-tests-deployment-9d845, resource: bindings, ignored listing per whitelist
Jan  2 20:01:04.824: INFO: namespace e2e-tests-deployment-9d845 deletion completed in 8.164791255s

• [SLOW TEST:30.128 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:01:04.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan  2 20:01:06.283: INFO: Waiting up to 5m0s for pod "client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005" in namespace "e2e-tests-containers-kdxc8" to be "success or failure"
Jan  2 20:01:06.568: INFO: Pod "client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 284.261356ms
Jan  2 20:01:08.681: INFO: Pod "client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397516509s
Jan  2 20:01:10.704: INFO: Pod "client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.420414704s
Jan  2 20:01:12.729: INFO: Pod "client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445992641s
Jan  2 20:01:14.777: INFO: Pod "client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.493775904s
Jan  2 20:01:16.817: INFO: Pod "client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.533359524s
STEP: Saw pod success
Jan  2 20:01:16.817: INFO: Pod "client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:01:16.839: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 20:01:17.125: INFO: Waiting for pod client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005 to disappear
Jan  2 20:01:17.284: INFO: Pod client-containers-9c82eefe-2d9a-11ea-814c-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:01:17.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-kdxc8" for this suite.
Jan  2 20:01:23.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:01:23.503: INFO: namespace: e2e-tests-containers-kdxc8, resource: bindings, ignored listing per whitelist
Jan  2 20:01:23.541: INFO: namespace e2e-tests-containers-kdxc8 deletion completed in 6.228620945s

• [SLOW TEST:18.716 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:01:23.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-a705aeeb-2d9a-11ea-814c-0242ac110005
Jan  2 20:01:23.948: INFO: Pod name my-hostname-basic-a705aeeb-2d9a-11ea-814c-0242ac110005: Found 0 pods out of 1
Jan  2 20:01:30.646: INFO: Pod name my-hostname-basic-a705aeeb-2d9a-11ea-814c-0242ac110005: Found 1 pods out of 1
Jan  2 20:01:30.646: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-a705aeeb-2d9a-11ea-814c-0242ac110005" are running
Jan  2 20:01:36.698: INFO: Pod "my-hostname-basic-a705aeeb-2d9a-11ea-814c-0242ac110005-llbkl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 20:01:24 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 20:01:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a705aeeb-2d9a-11ea-814c-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 20:01:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a705aeeb-2d9a-11ea-814c-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 20:01:23 +0000 UTC Reason: Message:}])
Jan  2 20:01:36.698: INFO: Trying to dial the pod
Jan  2 20:01:41.748: INFO: Controller my-hostname-basic-a705aeeb-2d9a-11ea-814c-0242ac110005: Got expected result from replica 1 [my-hostname-basic-a705aeeb-2d9a-11ea-814c-0242ac110005-llbkl]: "my-hostname-basic-a705aeeb-2d9a-11ea-814c-0242ac110005-llbkl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:01:41.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-qg4n9" for this suite.
Jan  2 20:01:49.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:01:49.907: INFO: namespace: e2e-tests-replication-controller-qg4n9, resource: bindings, ignored listing per whitelist
Jan  2 20:01:49.974: INFO: namespace e2e-tests-replication-controller-qg4n9 deletion completed in 8.216591038s

• [SLOW TEST:26.432 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:01:49.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-b7306690-2d9a-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 20:01:51.023: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-9pdn8" to be "success or failure"
Jan  2 20:01:51.055: INFO: Pod "pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.030786ms
Jan  2 20:01:53.118: INFO: Pod "pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094531454s
Jan  2 20:01:55.137: INFO: Pod "pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113659992s
Jan  2 20:01:57.847: INFO: Pod "pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.823953588s
Jan  2 20:01:59.935: INFO: Pod "pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.911588823s
Jan  2 20:02:01.951: INFO: Pod "pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.928133291s
Jan  2 20:02:03.968: INFO: Pod "pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.944510262s
STEP: Saw pod success
Jan  2 20:02:03.968: INFO: Pod "pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:02:03.973: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 20:02:04.166: INFO: Waiting for pod pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005 to disappear
Jan  2 20:02:04.532: INFO: Pod pod-projected-secrets-b7325b93-2d9a-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:02:04.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9pdn8" for this suite.
Jan  2 20:02:10.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:02:11.031: INFO: namespace: e2e-tests-projected-9pdn8, resource: bindings, ignored listing per whitelist
Jan  2 20:02:11.061: INFO: namespace e2e-tests-projected-9pdn8 deletion completed in 6.429049872s

• [SLOW TEST:21.086 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
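Editor's note: the spec above projects one Secret into a pod and reads it back; per its title it uses the same Secret in multiple volumes. A hand-written equivalent might look like the sketch below. All names (my-secret, the volume names, the busybox image) are illustrative stand-ins, not the generated names from the log.

```yaml
# Sketch: one Secret projected through two separate volumes in the same pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Print the key from the first mount; both mounts expose the same Secret.
    command: ["cat", "/etc/secret-volume-1/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: my-secret
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: my-secret
```

Like the test's pod, this runs to completion ("success or failure" is judged from the container's exit status and logs).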
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:02:11.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan  2 20:02:21.521: INFO: Pod pod-hostip-c365289f-2d9a-11ea-814c-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:02:21.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gns9s" for this suite.
Jan  2 20:02:45.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:02:45.698: INFO: namespace: e2e-tests-pods-gns9s, resource: bindings, ignored listing per whitelist
Jan  2 20:02:45.895: INFO: namespace e2e-tests-pods-gns9s deletion completed in 24.36483392s

• [SLOW TEST:34.832 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
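Editor's note: the host-IP spec above only asserts that pod.status.hostIP is populated (here 10.96.1.240). A way to observe the same field from inside a pod is the downward API; this sketch is illustrative, not the suite's pod:

```yaml
# Sketch: expose the scheduled node's IP to the container via the downward API.
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo host IP is $HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```

From outside the pod, `kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}'` reads the same field the test checks.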
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:02:45.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  2 20:02:46.080: INFO: Waiting up to 5m0s for pod "pod-d8057b0f-2d9a-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-tx94k" to be "success or failure"
Jan  2 20:02:46.181: INFO: Pod "pod-d8057b0f-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 100.995883ms
Jan  2 20:02:48.196: INFO: Pod "pod-d8057b0f-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115914859s
Jan  2 20:02:50.258: INFO: Pod "pod-d8057b0f-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178131134s
Jan  2 20:02:52.615: INFO: Pod "pod-d8057b0f-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535011535s
Jan  2 20:02:54.842: INFO: Pod "pod-d8057b0f-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.762080684s
Jan  2 20:02:56.879: INFO: Pod "pod-d8057b0f-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.799009343s
Jan  2 20:02:58.941: INFO: Pod "pod-d8057b0f-2d9a-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.860928653s
STEP: Saw pod success
Jan  2 20:02:58.941: INFO: Pod "pod-d8057b0f-2d9a-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:02:58.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d8057b0f-2d9a-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 20:02:59.096: INFO: Waiting for pod pod-d8057b0f-2d9a-11ea-814c-0242ac110005 to disappear
Jan  2 20:02:59.100: INFO: Pod pod-d8057b0f-2d9a-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:02:59.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tx94k" for this suite.
Jan  2 20:03:05.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:03:05.186: INFO: namespace: e2e-tests-emptydir-tx94k, resource: bindings, ignored listing per whitelist
Jan  2 20:03:05.299: INFO: namespace e2e-tests-emptydir-tx94k deletion completed in 6.192736394s

• [SLOW TEST:19.404 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
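Editor's note: the (non-root,0666,tmpfs) spec exercises an emptyDir backed by memory, written as a non-root user. The suite's mounttest container creates a file with the requested 0666 mode and echoes the result; the manifest below is a simplified sketch of the same setup (image, UID, and paths are illustrative):

```yaml
# Sketch: tmpfs-backed emptyDir written by a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root, as in the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox
    # Create a 0666 file on the volume and show its mode and the fs type.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs rather than node disk
```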
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:03:05.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 20:03:05.486: INFO: Waiting up to 5m0s for pod "downward-api-e39661f9-2d9a-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-fvqkh" to be "success or failure"
Jan  2 20:03:05.500: INFO: Pod "downward-api-e39661f9-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.75983ms
Jan  2 20:03:07.933: INFO: Pod "downward-api-e39661f9-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.446305338s
Jan  2 20:03:10.127: INFO: Pod "downward-api-e39661f9-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.640939277s
Jan  2 20:03:12.173: INFO: Pod "downward-api-e39661f9-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.686936053s
Jan  2 20:03:14.200: INFO: Pod "downward-api-e39661f9-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.714254056s
Jan  2 20:03:16.309: INFO: Pod "downward-api-e39661f9-2d9a-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.822613747s
Jan  2 20:03:18.330: INFO: Pod "downward-api-e39661f9-2d9a-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.84358938s
STEP: Saw pod success
Jan  2 20:03:18.330: INFO: Pod "downward-api-e39661f9-2d9a-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:03:18.342: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e39661f9-2d9a-11ea-814c-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 20:03:18.799: INFO: Waiting for pod downward-api-e39661f9-2d9a-11ea-814c-0242ac110005 to disappear
Jan  2 20:03:18.984: INFO: Pod downward-api-e39661f9-2d9a-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:03:18.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fvqkh" for this suite.
Jan  2 20:03:25.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:03:25.110: INFO: namespace: e2e-tests-downward-api-fvqkh, resource: bindings, ignored listing per whitelist
Jan  2 20:03:25.213: INFO: namespace e2e-tests-downward-api-fvqkh deletion completed in 6.217867932s

• [SLOW TEST:19.913 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
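Editor's note: the Downward API spec above injects the container's own resource requests and limits as environment variables via resourceFieldRef. A minimal sketch (env var names and quantities are illustrative; the suite's container is also named dapi-container):

```yaml
# Sketch: limits.cpu/memory and requests.cpu/memory surfaced as env vars.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```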
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:03:25.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan  2 20:03:25.420: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix269285720/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:03:25.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x9psm" for this suite.
Jan  2 20:03:31.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:03:31.740: INFO: namespace: e2e-tests-kubectl-x9psm, resource: bindings, ignored listing per whitelist
Jan  2 20:03:31.780: INFO: namespace e2e-tests-kubectl-x9psm deletion completed in 6.19576952s

• [SLOW TEST:6.566 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:03:31.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-pftnm
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-pftnm
STEP: Deleting pre-stop pod
Jan  2 20:04:01.152: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:04:01.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-pftnm" for this suite.
Jan  2 20:04:45.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:04:45.716: INFO: namespace: e2e-tests-prestop-pftnm, resource: bindings, ignored listing per whitelist
Jan  2 20:04:45.718: INFO: namespace e2e-tests-prestop-pftnm deletion completed in 44.525960232s

• [SLOW TEST:73.937 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
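Editor's note: the PreStop spec works by giving a server pod a preStop hook that reports to a tester pod, then deleting the server; the JSON dump above ("prestop": 1) shows the hook fired exactly once before termination. A sketch of the shape of such a pod (image, command, and the tester endpoint are illustrative, not the suite's):

```yaml
# Sketch: a pod whose preStop hook notifies a peer before the container stops.
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30   # preStop must finish within this window
  containers:
  - name: server
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Hypothetical tester endpoint; the e2e suite uses its own tester pod.
          command: ["sh", "-c", "wget -qO- http://tester:8080/prestop || true"]
```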
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:04:45.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 20:04:45.963: INFO: Waiting up to 5m0s for pod "downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-lb6xj" to be "success or failure"
Jan  2 20:04:45.974: INFO: Pod "downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.986815ms
Jan  2 20:04:48.354: INFO: Pod "downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.390901105s
Jan  2 20:04:50.563: INFO: Pod "downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.599310535s
Jan  2 20:04:53.041: INFO: Pod "downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.077373625s
Jan  2 20:04:55.053: INFO: Pod "downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.089962717s
Jan  2 20:04:57.064: INFO: Pod "downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.100244532s
Jan  2 20:04:59.094: INFO: Pod "downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.130391257s
STEP: Saw pod success
Jan  2 20:04:59.094: INFO: Pod "downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:04:59.113: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 20:04:59.399: INFO: Waiting for pod downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005 to disappear
Jan  2 20:04:59.411: INFO: Pod downward-api-1f7a4595-2d9b-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:04:59.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lb6xj" for this suite.
Jan  2 20:05:05.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:05:05.702: INFO: namespace: e2e-tests-downward-api-lb6xj, resource: bindings, ignored listing per whitelist
Jan  2 20:05:05.787: INFO: namespace e2e-tests-downward-api-lb6xj deletion completed in 6.365828744s

• [SLOW TEST:20.069 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
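Editor's note: this Downward API variant omits resource limits entirely; in that case a resourceFieldRef for limits.cpu/limits.memory falls back to the node's allocatable capacity, which is what the spec verifies. Illustrative sketch (names and divisors are not the suite's):

```yaml
# Sketch: with no limits set, limits.* resourceFieldRefs report node allocatable.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT memory=$MEMORY_LIMIT"]
    # No resources stanza on purpose.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
          divisor: 1m          # report in millicores
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```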
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:05:05.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 20:05:05.978: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-7vlb8" to be "success or failure"
Jan  2 20:05:05.988: INFO: Pod "downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.126945ms
Jan  2 20:05:08.007: INFO: Pod "downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028914569s
Jan  2 20:05:10.054: INFO: Pod "downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076506088s
Jan  2 20:05:12.976: INFO: Pod "downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.998276341s
Jan  2 20:05:15.396: INFO: Pod "downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.417777385s
Jan  2 20:05:17.418: INFO: Pod "downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.439950588s
STEP: Saw pod success
Jan  2 20:05:17.418: INFO: Pod "downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:05:17.423: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 20:05:18.997: INFO: Waiting for pod downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005 to disappear
Jan  2 20:05:19.399: INFO: Pod downwardapi-volume-2b68a223-2d9b-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:05:19.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7vlb8" for this suite.
Jan  2 20:05:25.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:05:25.771: INFO: namespace: e2e-tests-projected-7vlb8, resource: bindings, ignored listing per whitelist
Jan  2 20:05:25.961: INFO: namespace e2e-tests-projected-7vlb8 deletion completed in 6.544756105s

• [SLOW TEST:20.174 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
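Editor's note: the projected downwardAPI spec above sets an explicit file mode on a projected item; the test then reads the file's permissions back from inside the pod. A sketch of the relevant volume shape (paths and the 0400 mode are illustrative):

```yaml
# Sketch: a projected downwardAPI volume with a per-item file mode.
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400         # item-level mode, overriding defaultMode
```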
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:05:25.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-xvhk2
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-xvhk2
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-xvhk2
Jan  2 20:05:26.234: INFO: Found 0 stateful pods, waiting for 1
Jan  2 20:05:36.251: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 20:05:46.259: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  2 20:05:46.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 20:05:47.286: INFO: stderr: "I0102 20:05:46.700655     498 log.go:172] (0xc0005f20b0) (0xc0007105a0) Create stream\nI0102 20:05:46.701037     498 log.go:172] (0xc0005f20b0) (0xc0007105a0) Stream added, broadcasting: 1\nI0102 20:05:46.709616     498 log.go:172] (0xc0005f20b0) Reply frame received for 1\nI0102 20:05:46.709687     498 log.go:172] (0xc0005f20b0) (0xc000710640) Create stream\nI0102 20:05:46.709708     498 log.go:172] (0xc0005f20b0) (0xc000710640) Stream added, broadcasting: 3\nI0102 20:05:46.711730     498 log.go:172] (0xc0005f20b0) Reply frame received for 3\nI0102 20:05:46.711807     498 log.go:172] (0xc0005f20b0) (0xc0008a40a0) Create stream\nI0102 20:05:46.711841     498 log.go:172] (0xc0005f20b0) (0xc0008a40a0) Stream added, broadcasting: 5\nI0102 20:05:46.713893     498 log.go:172] (0xc0005f20b0) Reply frame received for 5\nI0102 20:05:47.112721     498 log.go:172] (0xc0005f20b0) Data frame received for 3\nI0102 20:05:47.112795     498 log.go:172] (0xc000710640) (3) Data frame handling\nI0102 20:05:47.112809     498 log.go:172] (0xc000710640) (3) Data frame sent\nI0102 20:05:47.274702     498 log.go:172] (0xc0005f20b0) (0xc0008a40a0) Stream removed, broadcasting: 5\nI0102 20:05:47.274957     498 log.go:172] (0xc0005f20b0) Data frame received for 1\nI0102 20:05:47.274995     498 log.go:172] (0xc0005f20b0) (0xc000710640) Stream removed, broadcasting: 3\nI0102 20:05:47.275022     498 log.go:172] (0xc0007105a0) (1) Data frame handling\nI0102 20:05:47.275051     498 log.go:172] (0xc0007105a0) (1) Data frame sent\nI0102 20:05:47.275056     498 log.go:172] (0xc0005f20b0) (0xc0007105a0) Stream removed, broadcasting: 1\nI0102 20:05:47.275063     498 log.go:172] (0xc0005f20b0) Go away received\nI0102 20:05:47.275689     498 log.go:172] (0xc0005f20b0) (0xc0007105a0) Stream removed, broadcasting: 1\nI0102 20:05:47.275714     498 log.go:172] (0xc0005f20b0) (0xc000710640) Stream removed, broadcasting: 3\nI0102 20:05:47.275720     498 log.go:172] (0xc0005f20b0) (0xc0008a40a0) Stream removed, broadcasting: 5\n"
Jan  2 20:05:47.286: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 20:05:47.286: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 20:05:47.308: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  2 20:05:57.327: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 20:05:57.327: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 20:05:57.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999505s
Jan  2 20:05:58.391: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.981979583s
Jan  2 20:05:59.409: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.959887018s
Jan  2 20:06:00.428: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.941593681s
Jan  2 20:06:01.455: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.922521922s
Jan  2 20:06:02.479: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.895346969s
Jan  2 20:06:03.504: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.871372064s
Jan  2 20:06:04.547: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.846568502s
Jan  2 20:06:05.563: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.803775629s
Jan  2 20:06:06.621: INFO: Verifying statefulset ss doesn't scale past 1 for another 787.257832ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-xvhk2
Jan  2 20:06:07.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:06:08.324: INFO: stderr: "I0102 20:06:07.985592     520 log.go:172] (0xc000724370) (0xc000740640) Create stream\nI0102 20:06:07.985967     520 log.go:172] (0xc000724370) (0xc000740640) Stream added, broadcasting: 1\nI0102 20:06:07.995937     520 log.go:172] (0xc000724370) Reply frame received for 1\nI0102 20:06:07.996049     520 log.go:172] (0xc000724370) (0xc0007406e0) Create stream\nI0102 20:06:07.996063     520 log.go:172] (0xc000724370) (0xc0007406e0) Stream added, broadcasting: 3\nI0102 20:06:07.998344     520 log.go:172] (0xc000724370) Reply frame received for 3\nI0102 20:06:07.998385     520 log.go:172] (0xc000724370) (0xc0005eabe0) Create stream\nI0102 20:06:07.998397     520 log.go:172] (0xc000724370) (0xc0005eabe0) Stream added, broadcasting: 5\nI0102 20:06:07.999857     520 log.go:172] (0xc000724370) Reply frame received for 5\nI0102 20:06:08.118341     520 log.go:172] (0xc000724370) Data frame received for 3\nI0102 20:06:08.118910     520 log.go:172] (0xc0007406e0) (3) Data frame handling\nI0102 20:06:08.119007     520 log.go:172] (0xc0007406e0) (3) Data frame sent\nI0102 20:06:08.304812     520 log.go:172] (0xc000724370) Data frame received for 1\nI0102 20:06:08.305018     520 log.go:172] (0xc000724370) (0xc0007406e0) Stream removed, broadcasting: 3\nI0102 20:06:08.305085     520 log.go:172] (0xc000740640) (1) Data frame handling\nI0102 20:06:08.305130     520 log.go:172] (0xc000740640) (1) Data frame sent\nI0102 20:06:08.305140     520 log.go:172] (0xc000724370) (0xc000740640) Stream removed, broadcasting: 1\nI0102 20:06:08.305495     520 log.go:172] (0xc000724370) (0xc0005eabe0) Stream removed, broadcasting: 5\nI0102 20:06:08.305832     520 log.go:172] (0xc000724370) (0xc000740640) Stream removed, broadcasting: 1\nI0102 20:06:08.305840     520 log.go:172] (0xc000724370) (0xc0007406e0) Stream removed, broadcasting: 3\nI0102 20:06:08.305844     520 log.go:172] (0xc000724370) (0xc0005eabe0) Stream removed, broadcasting: 5\n"
Jan  2 20:06:08.324: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 20:06:08.324: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 20:06:08.347: INFO: Found 1 stateful pods, waiting for 3
Jan  2 20:06:18.365: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:06:18.365: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:06:18.365: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 20:06:28.369: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:06:28.369: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:06:28.369: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  2 20:06:28.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 20:06:29.072: INFO: stderr: "I0102 20:06:28.726430     542 log.go:172] (0xc0001386e0) (0xc0005e9360) Create stream\nI0102 20:06:28.726929     542 log.go:172] (0xc0001386e0) (0xc0005e9360) Stream added, broadcasting: 1\nI0102 20:06:28.740369     542 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0102 20:06:28.740481     542 log.go:172] (0xc0001386e0) (0xc0007a6000) Create stream\nI0102 20:06:28.740492     542 log.go:172] (0xc0001386e0) (0xc0007a6000) Stream added, broadcasting: 3\nI0102 20:06:28.743646     542 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0102 20:06:28.743741     542 log.go:172] (0xc0001386e0) (0xc000214000) Create stream\nI0102 20:06:28.743760     542 log.go:172] (0xc0001386e0) (0xc000214000) Stream added, broadcasting: 5\nI0102 20:06:28.745194     542 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0102 20:06:28.891155     542 log.go:172] (0xc0001386e0) Data frame received for 3\nI0102 20:06:28.891317     542 log.go:172] (0xc0007a6000) (3) Data frame handling\nI0102 20:06:28.891367     542 log.go:172] (0xc0007a6000) (3) Data frame sent\nI0102 20:06:29.055874     542 log.go:172] (0xc0001386e0) Data frame received for 1\nI0102 20:06:29.056138     542 log.go:172] (0xc0001386e0) (0xc000214000) Stream removed, broadcasting: 5\nI0102 20:06:29.056285     542 log.go:172] (0xc0001386e0) (0xc0007a6000) Stream removed, broadcasting: 3\nI0102 20:06:29.056360     542 log.go:172] (0xc0005e9360) (1) Data frame handling\nI0102 20:06:29.056374     542 log.go:172] (0xc0005e9360) (1) Data frame sent\nI0102 20:06:29.056380     542 log.go:172] (0xc0001386e0) (0xc0005e9360) Stream removed, broadcasting: 1\nI0102 20:06:29.056397     542 log.go:172] (0xc0001386e0) Go away received\nI0102 20:06:29.057216     542 log.go:172] (0xc0001386e0) (0xc0005e9360) Stream removed, broadcasting: 1\nI0102 20:06:29.057238     542 log.go:172] (0xc0001386e0) (0xc0007a6000) Stream removed, broadcasting: 3\nI0102 20:06:29.057245     542 log.go:172] (0xc0001386e0) (0xc000214000) Stream removed, broadcasting: 5\n"
Jan  2 20:06:29.073: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 20:06:29.073: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 20:06:29.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 20:06:29.587: INFO: stderr: "I0102 20:06:29.264816     564 log.go:172] (0xc000702370) (0xc0006454a0) Create stream\nI0102 20:06:29.265168     564 log.go:172] (0xc000702370) (0xc0006454a0) Stream added, broadcasting: 1\nI0102 20:06:29.272273     564 log.go:172] (0xc000702370) Reply frame received for 1\nI0102 20:06:29.272334     564 log.go:172] (0xc000702370) (0xc000524000) Create stream\nI0102 20:06:29.272346     564 log.go:172] (0xc000702370) (0xc000524000) Stream added, broadcasting: 3\nI0102 20:06:29.273610     564 log.go:172] (0xc000702370) Reply frame received for 3\nI0102 20:06:29.273640     564 log.go:172] (0xc000702370) (0xc0005240a0) Create stream\nI0102 20:06:29.273652     564 log.go:172] (0xc000702370) (0xc0005240a0) Stream added, broadcasting: 5\nI0102 20:06:29.275241     564 log.go:172] (0xc000702370) Reply frame received for 5\nI0102 20:06:29.439209     564 log.go:172] (0xc000702370) Data frame received for 3\nI0102 20:06:29.439297     564 log.go:172] (0xc000524000) (3) Data frame handling\nI0102 20:06:29.439320     564 log.go:172] (0xc000524000) (3) Data frame sent\nI0102 20:06:29.574894     564 log.go:172] (0xc000702370) (0xc0005240a0) Stream removed, broadcasting: 5\nI0102 20:06:29.575111     564 log.go:172] (0xc000702370) Data frame received for 1\nI0102 20:06:29.575158     564 log.go:172] (0xc000702370) (0xc000524000) Stream removed, broadcasting: 3\nI0102 20:06:29.575220     564 log.go:172] (0xc0006454a0) (1) Data frame handling\nI0102 20:06:29.575242     564 log.go:172] (0xc0006454a0) (1) Data frame sent\nI0102 20:06:29.575260     564 log.go:172] (0xc000702370) (0xc0006454a0) Stream removed, broadcasting: 1\nI0102 20:06:29.575277     564 log.go:172] (0xc000702370) Go away received\nI0102 20:06:29.576383     564 log.go:172] (0xc000702370) (0xc0006454a0) Stream removed, broadcasting: 1\nI0102 20:06:29.576418     564 log.go:172] (0xc000702370) (0xc000524000) Stream removed, broadcasting: 3\nI0102 20:06:29.576422     564 log.go:172] 
(0xc000702370) (0xc0005240a0) Stream removed, broadcasting: 5\n"
Jan  2 20:06:29.587: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 20:06:29.587: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 20:06:29.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 20:06:30.403: INFO: stderr: "I0102 20:06:30.079588     587 log.go:172] (0xc000138c60) (0xc000809860) Create stream\nI0102 20:06:30.080125     587 log.go:172] (0xc000138c60) (0xc000809860) Stream added, broadcasting: 1\nI0102 20:06:30.087884     587 log.go:172] (0xc000138c60) Reply frame received for 1\nI0102 20:06:30.088153     587 log.go:172] (0xc000138c60) (0xc00037a500) Create stream\nI0102 20:06:30.088176     587 log.go:172] (0xc000138c60) (0xc00037a500) Stream added, broadcasting: 3\nI0102 20:06:30.095607     587 log.go:172] (0xc000138c60) Reply frame received for 3\nI0102 20:06:30.095646     587 log.go:172] (0xc000138c60) (0xc0002fd720) Create stream\nI0102 20:06:30.095658     587 log.go:172] (0xc000138c60) (0xc0002fd720) Stream added, broadcasting: 5\nI0102 20:06:30.097835     587 log.go:172] (0xc000138c60) Reply frame received for 5\nI0102 20:06:30.251533     587 log.go:172] (0xc000138c60) Data frame received for 3\nI0102 20:06:30.251594     587 log.go:172] (0xc00037a500) (3) Data frame handling\nI0102 20:06:30.251606     587 log.go:172] (0xc00037a500) (3) Data frame sent\nI0102 20:06:30.387151     587 log.go:172] (0xc000138c60) (0xc00037a500) Stream removed, broadcasting: 3\nI0102 20:06:30.387418     587 log.go:172] (0xc000138c60) Data frame received for 1\nI0102 20:06:30.387449     587 log.go:172] (0xc000809860) (1) Data frame handling\nI0102 20:06:30.387470     587 log.go:172] (0xc000809860) (1) Data frame sent\nI0102 20:06:30.387492     587 log.go:172] (0xc000138c60) (0xc000809860) Stream removed, broadcasting: 1\nI0102 20:06:30.387509     587 log.go:172] (0xc000138c60) (0xc0002fd720) Stream removed, broadcasting: 5\nI0102 20:06:30.387524     587 log.go:172] (0xc000138c60) Go away received\nI0102 20:06:30.388315     587 log.go:172] (0xc000138c60) (0xc000809860) Stream removed, broadcasting: 1\nI0102 20:06:30.388372     587 log.go:172] (0xc000138c60) (0xc00037a500) Stream removed, broadcasting: 3\nI0102 20:06:30.388394     587 log.go:172] 
(0xc000138c60) (0xc0002fd720) Stream removed, broadcasting: 5\n"
Jan  2 20:06:30.403: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 20:06:30.403: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
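The three exec commands above break each pod's readiness probe by moving nginx's index.html out of the web root; the `|| true` suffix keeps the exec's exit status zero even when the file is already gone (e.g. on a repeat attempt). A minimal local sketch of that idiom, using temp directories as stand-ins for `/usr/share/nginx/html` and `/tmp`:

```shell
# Local sketch of the test's readiness-breaking idiom: move a file out
# of a "web root", with '|| true' so a repeat attempt is not fatal.
# The temp dirs below are illustrative stand-ins, not the real paths.
webroot=$(mktemp -d)
stash=$(mktemp -d)
echo ok > "$webroot/index.html"

# First attempt: the file exists, so mv performs (and reports) the move.
/bin/sh -c "mv -v '$webroot/index.html' '$stash/' || true"
first_status=$?

# Second attempt: the file is already gone; mv fails, but '|| true'
# masks the failure, exactly as in the kubectl exec command above.
/bin/sh -c "mv -v '$webroot/index.html' '$stash/' || true" 2>/dev/null
second_status=$?

echo "first=$first_status second=$second_status"
```

Both invocations report success, which is why the test can run the same command idempotently across pods and retries.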

Jan  2 20:06:30.403: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 20:06:30.424: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  2 20:06:40.462: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 20:06:40.463: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 20:06:40.463: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 20:06:40.527: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999959s
Jan  2 20:06:41.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.96968708s
Jan  2 20:06:42.641: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.874772628s
Jan  2 20:06:43.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.855538952s
Jan  2 20:06:44.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.746076117s
Jan  2 20:06:45.835: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.730938108s
Jan  2 20:06:46.868: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.662055703s
Jan  2 20:06:47.908: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.629076712s
Jan  2 20:06:48.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.589114444s
Jan  2 20:06:49.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 566.537466ms
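The ten countdown lines above come from a bounded poll: roughly once a second the test re-checks that the StatefulSet has not scaled past 3 replicas, until a 10s deadline expires. A hedged sketch of that deadline loop, with a shorter 3s window and a hypothetical `check` function standing in for the real replica-count query:

```shell
# Sketch of a bounded "verify the condition keeps holding" poll,
# assuming a stand-in 'check' command (the real test queries the
# StatefulSet's .status.replicas instead).
check() { true; }
deadline=$(( $(date +%s) + 3 ))
held=yes
while [ "$(date +%s)" -lt "$deadline" ]; do
  if ! check; then
    held=no
    break
  fi
  remaining=$(( deadline - $(date +%s) ))
  echo "Verifying condition holds for another ${remaining}s"
  sleep 1
done
echo "held=$held"
```

Note the loop fails fast on a violation but must run the clock out to succeed, which is why the log shows the full ten iterations.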
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-xvhk2
Jan  2 20:06:51.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:06:51.660: INFO: stderr: "I0102 20:06:51.294629     609 log.go:172] (0xc0006b60b0) (0xc0006dc5a0) Create stream\nI0102 20:06:51.294884     609 log.go:172] (0xc0006b60b0) (0xc0006dc5a0) Stream added, broadcasting: 1\nI0102 20:06:51.303022     609 log.go:172] (0xc0006b60b0) Reply frame received for 1\nI0102 20:06:51.303209     609 log.go:172] (0xc0006b60b0) (0xc00065cbe0) Create stream\nI0102 20:06:51.303225     609 log.go:172] (0xc0006b60b0) (0xc00065cbe0) Stream added, broadcasting: 3\nI0102 20:06:51.304785     609 log.go:172] (0xc0006b60b0) Reply frame received for 3\nI0102 20:06:51.304830     609 log.go:172] (0xc0006b60b0) (0xc0006dc640) Create stream\nI0102 20:06:51.304845     609 log.go:172] (0xc0006b60b0) (0xc0006dc640) Stream added, broadcasting: 5\nI0102 20:06:51.306527     609 log.go:172] (0xc0006b60b0) Reply frame received for 5\nI0102 20:06:51.456691     609 log.go:172] (0xc0006b60b0) Data frame received for 3\nI0102 20:06:51.456825     609 log.go:172] (0xc00065cbe0) (3) Data frame handling\nI0102 20:06:51.456852     609 log.go:172] (0xc00065cbe0) (3) Data frame sent\nI0102 20:06:51.639626     609 log.go:172] (0xc0006b60b0) Data frame received for 1\nI0102 20:06:51.640430     609 log.go:172] (0xc0006b60b0) (0xc00065cbe0) Stream removed, broadcasting: 3\nI0102 20:06:51.640577     609 log.go:172] (0xc0006dc5a0) (1) Data frame handling\nI0102 20:06:51.640609     609 log.go:172] (0xc0006dc5a0) (1) Data frame sent\nI0102 20:06:51.640759     609 log.go:172] (0xc0006b60b0) (0xc0006dc640) Stream removed, broadcasting: 5\nI0102 20:06:51.640897     609 log.go:172] (0xc0006b60b0) (0xc0006dc5a0) Stream removed, broadcasting: 1\nI0102 20:06:51.640963     609 log.go:172] (0xc0006b60b0) Go away received\nI0102 20:06:51.644067     609 log.go:172] (0xc0006b60b0) (0xc0006dc5a0) Stream removed, broadcasting: 1\nI0102 20:06:51.644120     609 log.go:172] (0xc0006b60b0) (0xc00065cbe0) Stream removed, broadcasting: 3\nI0102 20:06:51.644132     609 log.go:172] 
(0xc0006b60b0) (0xc0006dc640) Stream removed, broadcasting: 5\n"
Jan  2 20:06:51.661: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 20:06:51.661: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 20:06:51.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:06:52.473: INFO: stderr: "I0102 20:06:52.139405     630 log.go:172] (0xc0007202c0) (0xc0007a0640) Create stream\nI0102 20:06:52.139739     630 log.go:172] (0xc0007202c0) (0xc0007a0640) Stream added, broadcasting: 1\nI0102 20:06:52.145959     630 log.go:172] (0xc0007202c0) Reply frame received for 1\nI0102 20:06:52.146051     630 log.go:172] (0xc0007202c0) (0xc000666e60) Create stream\nI0102 20:06:52.146067     630 log.go:172] (0xc0007202c0) (0xc000666e60) Stream added, broadcasting: 3\nI0102 20:06:52.147413     630 log.go:172] (0xc0007202c0) Reply frame received for 3\nI0102 20:06:52.147442     630 log.go:172] (0xc0007202c0) (0xc0004ce000) Create stream\nI0102 20:06:52.147452     630 log.go:172] (0xc0007202c0) (0xc0004ce000) Stream added, broadcasting: 5\nI0102 20:06:52.148754     630 log.go:172] (0xc0007202c0) Reply frame received for 5\nI0102 20:06:52.292361     630 log.go:172] (0xc0007202c0) Data frame received for 3\nI0102 20:06:52.292508     630 log.go:172] (0xc000666e60) (3) Data frame handling\nI0102 20:06:52.292544     630 log.go:172] (0xc000666e60) (3) Data frame sent\nI0102 20:06:52.452276     630 log.go:172] (0xc0007202c0) Data frame received for 1\nI0102 20:06:52.452397     630 log.go:172] (0xc0007a0640) (1) Data frame handling\nI0102 20:06:52.452438     630 log.go:172] (0xc0007a0640) (1) Data frame sent\nI0102 20:06:52.452457     630 log.go:172] (0xc0007202c0) (0xc0007a0640) Stream removed, broadcasting: 1\nI0102 20:06:52.453080     630 log.go:172] (0xc0007202c0) (0xc000666e60) Stream removed, broadcasting: 3\nI0102 20:06:52.453244     630 log.go:172] (0xc0007202c0) (0xc0004ce000) Stream removed, broadcasting: 5\nI0102 20:06:52.453348     630 log.go:172] (0xc0007202c0) (0xc0007a0640) Stream removed, broadcasting: 1\nI0102 20:06:52.453363     630 log.go:172] (0xc0007202c0) (0xc000666e60) Stream removed, broadcasting: 3\nI0102 20:06:52.453371     630 log.go:172] (0xc0007202c0) (0xc0004ce000) Stream removed, broadcasting: 5\n"
Jan  2 20:06:52.473: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 20:06:52.473: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 20:06:52.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:06:53.357: INFO: rc: 126
Jan  2 20:06:53.357: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused \"exit status 21\"": unknown
I0102 20:06:52.764967     652 log.go:172] (0xc00072e370) (0xc000673400) Create stream
I0102 20:06:52.765396     652 log.go:172] (0xc00072e370) (0xc000673400) Stream added, broadcasting: 1
I0102 20:06:52.774407     652 log.go:172] (0xc00072e370) Reply frame received for 1
I0102 20:06:52.774574     652 log.go:172] (0xc00072e370) (0xc0007a6000) Create stream
I0102 20:06:52.774598     652 log.go:172] (0xc00072e370) (0xc0007a6000) Stream added, broadcasting: 3
I0102 20:06:52.776299     652 log.go:172] (0xc00072e370) Reply frame received for 3
I0102 20:06:52.776336     652 log.go:172] (0xc00072e370) (0xc0006734a0) Create stream
I0102 20:06:52.776354     652 log.go:172] (0xc00072e370) (0xc0006734a0) Stream added, broadcasting: 5
I0102 20:06:52.778117     652 log.go:172] (0xc00072e370) Reply frame received for 5
I0102 20:06:53.344319     652 log.go:172] (0xc00072e370) Data frame received for 3
I0102 20:06:53.344456     652 log.go:172] (0xc0007a6000) (3) Data frame handling
I0102 20:06:53.344499     652 log.go:172] (0xc0007a6000) (3) Data frame sent
I0102 20:06:53.347900     652 log.go:172] (0xc00072e370) Data frame received for 1
I0102 20:06:53.347958     652 log.go:172] (0xc000673400) (1) Data frame handling
I0102 20:06:53.347982     652 log.go:172] (0xc000673400) (1) Data frame sent
I0102 20:06:53.348054     652 log.go:172] (0xc00072e370) (0xc000673400) Stream removed, broadcasting: 1
I0102 20:06:53.348218     652 log.go:172] (0xc00072e370) (0xc0007a6000) Stream removed, broadcasting: 3
I0102 20:06:53.348359     652 log.go:172] (0xc00072e370) (0xc0006734a0) Stream removed, broadcasting: 5
I0102 20:06:53.348460     652 log.go:172] (0xc00072e370) Go away received
I0102 20:06:53.348909     652 log.go:172] (0xc00072e370) (0xc000673400) Stream removed, broadcasting: 1
I0102 20:06:53.348926     652 log.go:172] (0xc00072e370) (0xc0007a6000) Stream removed, broadcasting: 3
I0102 20:06:53.348934     652 log.go:172] (0xc00072e370) (0xc0006734a0) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc00140e540 exit status 126   true [0xc00045abc8 0xc00045abe8 0xc00045ac38] [0xc00045abc8 0xc00045abe8 0xc00045ac38] [0xc00045abe0 0xc00045ac10] [0x935700 0x935700] 0xc0017d8a80 }:
Command stdout:
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused \"exit status 21\"": unknown

stderr:
I0102 20:06:52.764967     652 log.go:172] (0xc00072e370) (0xc000673400) Create stream
I0102 20:06:52.765396     652 log.go:172] (0xc00072e370) (0xc000673400) Stream added, broadcasting: 1
I0102 20:06:52.774407     652 log.go:172] (0xc00072e370) Reply frame received for 1
I0102 20:06:52.774574     652 log.go:172] (0xc00072e370) (0xc0007a6000) Create stream
I0102 20:06:52.774598     652 log.go:172] (0xc00072e370) (0xc0007a6000) Stream added, broadcasting: 3
I0102 20:06:52.776299     652 log.go:172] (0xc00072e370) Reply frame received for 3
I0102 20:06:52.776336     652 log.go:172] (0xc00072e370) (0xc0006734a0) Create stream
I0102 20:06:52.776354     652 log.go:172] (0xc00072e370) (0xc0006734a0) Stream added, broadcasting: 5
I0102 20:06:52.778117     652 log.go:172] (0xc00072e370) Reply frame received for 5
I0102 20:06:53.344319     652 log.go:172] (0xc00072e370) Data frame received for 3
I0102 20:06:53.344456     652 log.go:172] (0xc0007a6000) (3) Data frame handling
I0102 20:06:53.344499     652 log.go:172] (0xc0007a6000) (3) Data frame sent
I0102 20:06:53.347900     652 log.go:172] (0xc00072e370) Data frame received for 1
I0102 20:06:53.347958     652 log.go:172] (0xc000673400) (1) Data frame handling
I0102 20:06:53.347982     652 log.go:172] (0xc000673400) (1) Data frame sent
I0102 20:06:53.348054     652 log.go:172] (0xc00072e370) (0xc000673400) Stream removed, broadcasting: 1
I0102 20:06:53.348218     652 log.go:172] (0xc00072e370) (0xc0007a6000) Stream removed, broadcasting: 3
I0102 20:06:53.348359     652 log.go:172] (0xc00072e370) (0xc0006734a0) Stream removed, broadcasting: 5
I0102 20:06:53.348460     652 log.go:172] (0xc00072e370) Go away received
I0102 20:06:53.348909     652 log.go:172] (0xc00072e370) (0xc000673400) Stream removed, broadcasting: 1
I0102 20:06:53.348926     652 log.go:172] (0xc00072e370) (0xc0007a6000) Stream removed, broadcasting: 3
I0102 20:06:53.348934     652 log.go:172] (0xc00072e370) (0xc0006734a0) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126
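The failure above and the retries that follow carry different exit codes: 126 comes from the container runtime (the OCI `exec` process could not be started inside ss-2, which was being torn down), while the later `pods "ss-2" not found` retries get exit status 1 from kubectl itself once the pod is gone. These follow the usual shell conventions (126 = found but not runnable, 127 = not found, 1 = generic failure), demonstrable locally:

```shell
# Exit-code conventions the retry loop distinguishes.
sh -c 'exit 1'; s1=$?               # generic failure (cf. kubectl's NotFound)
d=$(mktemp -d); touch "$d/noexec"
"$d/noexec" 2>/dev/null; s126=$?    # present but not executable -> 126
"$d/missing" 2>/dev/null; s127=$?   # no such command -> 127
echo "$s1 $s126 $s127"
```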

Jan  2 20:07:03.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:07:03.534: INFO: rc: 1
Jan  2 20:07:03.535: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0019b8120 exit status 1   true [0xc001d60000 0xc001d60018 0xc001d60030] [0xc001d60000 0xc001d60018 0xc001d60030] [0xc001d60010 0xc001d60028] [0x935700 0x935700] 0xc0011d65a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:07:13.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:07:14.076: INFO: rc: 1
Jan  2 20:07:14.076: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a46ba0 exit status 1   true [0xc001534070 0xc001534088 0xc0015340a0] [0xc001534070 0xc001534088 0xc0015340a0] [0xc001534080 0xc001534098] [0x935700 0x935700] 0xc00153f1a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:07:24.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:07:24.194: INFO: rc: 1
Jan  2 20:07:24.194: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00140e690 exit status 1   true [0xc00045ac50 0xc00045acb8 0xc00045ad60] [0xc00045ac50 0xc00045acb8 0xc00045ad60] [0xc00045ac98 0xc00045ad30] [0x935700 0x935700] 0xc0017d9020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:07:34.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:07:34.318: INFO: rc: 1
Jan  2 20:07:34.319: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a46cc0 exit status 1   true [0xc0015340a8 0xc0015340c0 0xc0015340d8] [0xc0015340a8 0xc0015340c0 0xc0015340d8] [0xc0015340b8 0xc0015340d0] [0x935700 0x935700] 0xc00153f7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:07:44.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:07:44.513: INFO: rc: 1
Jan  2 20:07:44.513: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a47050 exit status 1   true [0xc0015340e0 0xc0015340f8 0xc001534110] [0xc0015340e0 0xc0015340f8 0xc001534110] [0xc0015340f0 0xc001534108] [0x935700 0x935700] 0xc00153fda0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:07:54.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:07:54.658: INFO: rc: 1
Jan  2 20:07:54.658: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00140e7b0 exit status 1   true [0xc00045ad98 0xc00045ae50 0xc00045aee0] [0xc00045ad98 0xc00045ae50 0xc00045aee0] [0xc00045ade0 0xc00045aeb0] [0x935700 0x935700] 0xc0017d9320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:08:04.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:08:04.810: INFO: rc: 1
Jan  2 20:08:04.810: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a471a0 exit status 1   true [0xc001534118 0xc001534130 0xc001534148] [0xc001534118 0xc001534130 0xc001534148] [0xc001534128 0xc001534140] [0x935700 0x935700] 0xc0017aea20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:08:14.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:08:14.957: INFO: rc: 1
Jan  2 20:08:14.957: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0019b82a0 exit status 1   true [0xc001d60038 0xc001d60050 0xc001d60068] [0xc001d60038 0xc001d60050 0xc001d60068] [0xc001d60048 0xc001d60060] [0x935700 0x935700] 0xc0011d6960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:08:24.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:08:25.072: INFO: rc: 1
Jan  2 20:08:25.072: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a472c0 exit status 1   true [0xc001534150 0xc001534168 0xc001534180] [0xc001534150 0xc001534168 0xc001534180] [0xc001534160 0xc001534178] [0x935700 0x935700] 0xc0017aed80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:08:35.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:08:35.272: INFO: rc: 1
Jan  2 20:08:35.272: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00140e900 exit status 1   true [0xc00045af60 0xc00045afd0 0xc00045b008] [0xc00045af60 0xc00045afd0 0xc00045b008] [0xc00045afc8 0xc00045aff0] [0x935700 0x935700] 0xc0017d9620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:08:45.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:08:45.500: INFO: rc: 1
Jan  2 20:08:45.500: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001d5e180 exit status 1   true [0xc00180a000 0xc00180a018 0xc00180a030] [0xc00180a000 0xc00180a018 0xc00180a030] [0xc00180a010 0xc00180a028] [0x935700 0x935700] 0xc00153e300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:08:55.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:08:55.959: INFO: rc: 1
Jan  2 20:08:55.960: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00140e120 exit status 1   true [0xc00045a0a8 0xc00045a398 0xc00045a4f8] [0xc00045a0a8 0xc00045a398 0xc00045a4f8] [0xc00045a2c8 0xc00045a4a8] [0x935700 0x935700] 0xc001abb800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:09:05.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:09:06.124: INFO: rc: 1
Jan  2 20:09:06.124: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00140e270 exit status 1   true [0xc00045a618 0xc00045aa50 0xc00045ab10] [0xc00045a618 0xc00045aa50 0xc00045ab10] [0xc00045aa18 0xc00045aae8] [0x935700 0x935700] 0xc0017d8060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:09:16.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:09:16.316: INFO: rc: 1
Jan  2 20:09:16.317: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0019b8150 exit status 1   true [0xc001d60008 0xc001d60020 0xc001d60038] [0xc001d60008 0xc001d60020 0xc001d60038] [0xc001d60018 0xc001d60030] [0x935700 0x935700] 0xc001530300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:09:26.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:09:26.507: INFO: rc: 1
Jan  2 20:09:26.508: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001d5e2d0 exit status 1   true [0xc00180a038 0xc00180a050 0xc00180a068] [0xc00180a038 0xc00180a050 0xc00180a068] [0xc00180a048 0xc00180a060] [0x935700 0x935700] 0xc00153f080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jan  2 20:09:36.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:09:36.641: INFO: rc: 1
Jan  2 20:09:36.642: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0019b82d0 exit status 1   true [0xc001d60040 0xc001d60058 0xc001d60070] [0xc001d60040 0xc001d60058 0xc001d60070] [0xc001d60050 0xc001d60068] [0x935700 0x935700] 0xc0015306c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

[... 13 further identical RunHostCmd retry cycles (20:09:46 through 20:11:48), each running the same kubectl exec, getting rc: 1 and 'Error from server (NotFound): pods "ss-2" not found', then waiting 10s, elided ...]
Jan  2 20:11:58.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xvhk2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:11:58.995: INFO: rc: 1
Jan  2 20:11:58.996: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan  2 20:11:58.996: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 20:11:59.024: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xvhk2
Jan  2 20:11:59.031: INFO: Scaling statefulset ss to 0
Jan  2 20:11:59.106: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 20:11:59.111: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:11:59.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-xvhk2" for this suite.
Jan  2 20:12:07.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:12:07.275: INFO: namespace: e2e-tests-statefulset-xvhk2, resource: bindings, ignored listing per whitelist
Jan  2 20:12:07.405: INFO: namespace e2e-tests-statefulset-xvhk2 deletion completed in 8.258603484s

• [SLOW TEST:401.442 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
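The retry output above comes from the framework's RunHostCmd helper, which re-runs the kubectl exec every 10s until it succeeds or the surrounding wait times out. A rough, self-contained sketch of that retry pattern, where `run_cmd` is a stub standing in for the real `kubectl exec ... -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'` call (the stub and retry bound are assumptions, not the framework's actual code):

```shell
# Sketch of a RunHostCmd-style retry loop.
attempt=0
run_cmd() {
  attempt=$((attempt + 1))
  # stub: fail twice (as if the pod is "not found"), succeed on the third try
  [ "$attempt" -ge 3 ]
}
retries=0
until run_cmd; do
  retries=$((retries + 1))
  echo "rc: 1 -- waiting to retry failed RunHostCmd (retry $retries)"
  # sleep 10   # the real framework waits 10s between attempts; skipped here
done
echo "command succeeded after $attempt attempts"
```

In the log, the command keeps failing because ss-2 was deleted during scale-down, so the loop runs until the test moves on and scales the StatefulSet to 0.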
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:12:07.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 20:12:08.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xwxqr'
Jan  2 20:12:10.751: INFO: stderr: ""
Jan  2 20:12:10.751: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan  2 20:12:10.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xwxqr'
Jan  2 20:12:16.893: INFO: stderr: ""
Jan  2 20:12:16.893: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:12:16.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xwxqr" for this suite.
Jan  2 20:12:22.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:12:23.055: INFO: namespace: e2e-tests-kubectl-xwxqr, resource: bindings, ignored listing per whitelist
Jan  2 20:12:23.062: INFO: namespace e2e-tests-kubectl-xwxqr deletion completed in 6.15684417s

• [SLOW TEST:15.657 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:12:23.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  2 20:12:23.213: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  2 20:12:23.221: INFO: Waiting for terminating namespaces to be deleted...
Jan  2 20:12:23.225: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  2 20:12:23.241: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 20:12:23.241: INFO: 	Container coredns ready: true, restart count 0
Jan  2 20:12:23.241: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 20:12:23.241: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 20:12:23.241: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 20:12:23.241: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 20:12:23.241: INFO: 	Container coredns ready: true, restart count 0
Jan  2 20:12:23.241: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  2 20:12:23.241: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  2 20:12:23.241: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 20:12:23.241: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  2 20:12:23.241: INFO: 	Container weave ready: true, restart count 0
Jan  2 20:12:23.241: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e62af327736b54], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:12:24.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-7mzdh" for this suite.
Jan  2 20:12:30.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:12:30.423: INFO: namespace: e2e-tests-sched-pred-7mzdh, resource: bindings, ignored listing per whitelist
Jan  2 20:12:30.756: INFO: namespace e2e-tests-sched-pred-7mzdh deletion completed in 6.464719779s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.693 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
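The FailedScheduling event above is produced by a pod whose nodeSelector matches no node's labels. A minimal manifest that would reproduce it might look like the following (the pod name comes from the log; the label key/value and image are assumptions):

```shell
# Write a hypothetical manifest for the "restricted-pod" case: a nodeSelector
# no node carries, so the scheduler reports
# "0/1 nodes are available: 1 node(s) didn't match node selector."
cat > /tmp/restricted-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image
  nodeSelector:
    label: nonempty               # assumed label; no node in the cluster has it
EOF
# kubectl apply -f /tmp/restricted-pod.yaml   # the pod would stay Pending
```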
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:12:30.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 20:12:31.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-w5l5n'
Jan  2 20:12:31.220: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 20:12:31.220: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan  2 20:12:33.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-w5l5n'
Jan  2 20:12:33.845: INFO: stderr: ""
Jan  2 20:12:33.845: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:12:33.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w5l5n" for this suite.
Jan  2 20:12:41.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:12:42.157: INFO: namespace: e2e-tests-kubectl-w5l5n, resource: bindings, ignored listing per whitelist
Jan  2 20:12:42.250: INFO: namespace e2e-tests-kubectl-w5l5n deletion completed in 8.364948037s

• [SLOW TEST:11.494 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
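The deprecation warning above ("--generator=deployment/apps.v1 is DEPRECATED") reflects how kubectl at this version (v1.13) picked a generator from the --restart flag. A small plain-shell sketch of that mapping (not kubectl's actual code):

```shell
# Map --restart values to the v1.13-era kubectl run generators:
# Never -> bare pod, OnFailure -> job, Always (the default) -> deployment.
generator_for_restart() {
  case "$1" in
    Never)     echo "run-pod/v1" ;;
    OnFailure) echo "job/v1" ;;
    Always|*)  echo "deployment/apps.v1" ;;
  esac
}
generator_for_restart Never    # -> run-pod/v1
generator_for_restart Always   # -> deployment/apps.v1
```

That is why the earlier "Kubectl run pod" test passed --restart=Never and got a bare pod, while this "Kubectl run default" test omitted the flag and got a deployment.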
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:12:42.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  2 20:12:52.645: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-3b841705-2d9c-11ea-814c-0242ac110005,GenerateName:,Namespace:e2e-tests-events-rkrq7,SelfLink:/api/v1/namespaces/e2e-tests-events-rkrq7/pods/send-events-3b841705-2d9c-11ea-814c-0242ac110005,UID:3b8f28a8-2d9c-11ea-a994-fa163e34d433,ResourceVersion:16957437,Generation:0,CreationTimestamp:2020-01-02 20:12:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 488218586,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pttq9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pttq9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pttq9 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002336480} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0023364a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:12:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:12:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:12:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:12:42 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-02 20:12:42 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-02 20:12:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://e32311400f17177e44fc4cc22827cf38504786931e43f03d1edcf42d6f0db60f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  2 20:12:54.683: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  2 20:12:56.720: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:12:56.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-rkrq7" for this suite.
Jan  2 20:13:44.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:13:44.924: INFO: namespace: e2e-tests-events-rkrq7, resource: bindings, ignored listing per whitelist
Jan  2 20:13:45.064: INFO: namespace e2e-tests-events-rkrq7 deletion completed in 48.287388316s

• [SLOW TEST:62.813 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:13:45.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-s6njf
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-s6njf to expose endpoints map[]
Jan  2 20:13:45.487: INFO: Get endpoints failed (26.494686ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  2 20:13:46.512: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-s6njf exposes endpoints map[] (1.05191436s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-s6njf
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-s6njf to expose endpoints map[pod1:[80]]
Jan  2 20:13:51.772: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.172336059s elapsed, will retry)
Jan  2 20:13:57.556: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-s6njf exposes endpoints map[pod1:[80]] (10.957044612s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-s6njf
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-s6njf to expose endpoints map[pod1:[80] pod2:[80]]
Jan  2 20:14:02.201: INFO: Unexpected endpoints: found map[61b128c0-2d9c-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.581367113s elapsed, will retry)
Jan  2 20:14:07.860: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-s6njf exposes endpoints map[pod1:[80] pod2:[80]] (10.240625383s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-s6njf
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-s6njf to expose endpoints map[pod2:[80]]
Jan  2 20:14:09.620: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-s6njf exposes endpoints map[pod2:[80]] (1.749475667s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-s6njf
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-s6njf to expose endpoints map[]
Jan  2 20:14:10.862: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-s6njf exposes endpoints map[] (1.214534587s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:14:11.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-s6njf" for this suite.
Jan  2 20:14:35.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:14:35.571: INFO: namespace: e2e-tests-services-s6njf, resource: bindings, ignored listing per whitelist
Jan  2 20:14:35.680: INFO: namespace e2e-tests-services-s6njf deletion completed in 24.371032096s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:50.616 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
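The endpoints maps above (map[pod1:[80]], map[pod1:[80] pod2:[80]], ...) are driven by the service's label selector: each Ready pod whose labels match contributes its IP and port to the Endpoints object. A minimal sketch of the service half (name and port from the log; the selector label is an assumption):

```shell
# Write a hypothetical manifest for endpoint-test2; pods pod1 and pod2 would
# carry the matching label, and deleting one removes it from the endpoints.
cat > /tmp/endpoint-test2.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test   # assumed label carried by pod1 and pod2
  ports:
  - port: 80
    targetPort: 80
EOF
# kubectl apply -f /tmp/endpoint-test2.yaml
# kubectl get endpoints endpoint-test2   # shows pod IPs once pods are Ready
```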
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:14:35.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  2 20:14:36.048: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  2 20:14:41.064: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:14:42.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-grb2x" for this suite.
Jan  2 20:14:56.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:14:56.303: INFO: namespace: e2e-tests-replication-controller-grb2x, resource: bindings, ignored listing per whitelist
Jan  2 20:14:56.325: INFO: namespace e2e-tests-replication-controller-grb2x deletion completed in 13.436319448s

• [SLOW TEST:20.645 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:14:56.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-8c6d1138-2d9c-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 20:14:58.462: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005" in namespace "e2e-tests-configmap-9n7tn" to be "success or failure"
Jan  2 20:14:58.540: INFO: Pod "pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 77.512573ms
Jan  2 20:15:00.802: INFO: Pod "pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34000349s
Jan  2 20:15:02.812: INFO: Pod "pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350114241s
Jan  2 20:15:06.005: INFO: Pod "pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.542537973s
Jan  2 20:15:08.020: INFO: Pod "pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.557883672s
Jan  2 20:15:10.277: INFO: Pod "pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.814794252s
STEP: Saw pod success
Jan  2 20:15:10.277: INFO: Pod "pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:15:10.306: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 20:15:10.798: INFO: Waiting for pod pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005 to disappear
Jan  2 20:15:10.820: INFO: Pod pod-configmaps-8c80bf46-2d9c-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:15:10.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9n7tn" for this suite.
Jan  2 20:15:16.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:15:17.023: INFO: namespace: e2e-tests-configmap-9n7tn, resource: bindings, ignored listing per whitelist
Jan  2 20:15:17.087: INFO: namespace e2e-tests-configmap-9n7tn deletion completed in 6.193744925s

• [SLOW TEST:20.761 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:15:17.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan  2 20:15:17.475: INFO: Waiting up to 5m0s for pod "client-containers-97e12433-2d9c-11ea-814c-0242ac110005" in namespace "e2e-tests-containers-v8c6d" to be "success or failure"
Jan  2 20:15:17.494: INFO: Pod "client-containers-97e12433-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.231621ms
Jan  2 20:15:19.924: INFO: Pod "client-containers-97e12433-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.448744551s
Jan  2 20:15:21.941: INFO: Pod "client-containers-97e12433-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.465976349s
Jan  2 20:15:23.970: INFO: Pod "client-containers-97e12433-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494795956s
Jan  2 20:15:26.519: INFO: Pod "client-containers-97e12433-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.043886191s
Jan  2 20:15:28.972: INFO: Pod "client-containers-97e12433-2d9c-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.496450372s
STEP: Saw pod success
Jan  2 20:15:28.972: INFO: Pod "client-containers-97e12433-2d9c-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:15:28.983: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-97e12433-2d9c-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 20:15:29.264: INFO: Waiting for pod client-containers-97e12433-2d9c-11ea-814c-0242ac110005 to disappear
Jan  2 20:15:29.273: INFO: Pod client-containers-97e12433-2d9c-11ea-814c-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:15:29.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-v8c6d" for this suite.
Jan  2 20:15:35.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:15:35.687: INFO: namespace: e2e-tests-containers-v8c6d, resource: bindings, ignored listing per whitelist
Jan  2 20:15:35.703: INFO: namespace e2e-tests-containers-v8c6d deletion completed in 6.419335783s

• [SLOW TEST:18.615 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:15:35.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  2 20:15:58.299: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 20:15:58.389: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 20:16:00.390: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 20:16:00.411: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 20:16:02.390: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 20:16:02.405: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 20:16:04.390: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 20:16:04.408: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 20:16:06.390: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 20:16:06.407: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:16:06.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-sbrk5" for this suite.
Jan  2 20:16:30.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:16:30.880: INFO: namespace: e2e-tests-container-lifecycle-hook-sbrk5, resource: bindings, ignored listing per whitelist
Jan  2 20:16:30.950: INFO: namespace e2e-tests-container-lifecycle-hook-sbrk5 deletion completed in 24.387578761s

• [SLOW TEST:55.246 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:16:30.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-c3ddcffc-2d9c-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 20:16:31.550: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-tsh4s" to be "success or failure"
Jan  2 20:16:31.565: INFO: Pod "pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.606086ms
Jan  2 20:16:33.585: INFO: Pod "pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035022891s
Jan  2 20:16:35.600: INFO: Pod "pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050206473s
Jan  2 20:16:38.209: INFO: Pod "pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.65954789s
Jan  2 20:16:40.234: INFO: Pod "pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.684205751s
Jan  2 20:16:42.249: INFO: Pod "pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.699460159s
Jan  2 20:16:44.727: INFO: Pod "pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.177260003s
STEP: Saw pod success
Jan  2 20:16:44.727: INFO: Pod "pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:16:44.740: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 20:16:45.241: INFO: Waiting for pod pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005 to disappear
Jan  2 20:16:45.261: INFO: Pod pod-projected-configmaps-c3def5f0-2d9c-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:16:45.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tsh4s" for this suite.
Jan  2 20:16:51.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:16:51.534: INFO: namespace: e2e-tests-projected-tsh4s, resource: bindings, ignored listing per whitelist
Jan  2 20:16:51.605: INFO: namespace e2e-tests-projected-tsh4s deletion completed in 6.33431929s

• [SLOW TEST:20.654 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:16:51.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-zv678/secret-test-d01a4168-2d9c-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 20:16:51.798: INFO: Waiting up to 5m0s for pod "pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005" in namespace "e2e-tests-secrets-zv678" to be "success or failure"
Jan  2 20:16:51.805: INFO: Pod "pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.610314ms
Jan  2 20:16:54.054: INFO: Pod "pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25622052s
Jan  2 20:16:56.081: INFO: Pod "pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28297783s
Jan  2 20:16:58.214: INFO: Pod "pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415785238s
Jan  2 20:17:00.246: INFO: Pod "pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.448276317s
Jan  2 20:17:02.260: INFO: Pod "pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.462558386s
STEP: Saw pod success
Jan  2 20:17:02.261: INFO: Pod "pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:17:02.442: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005 container env-test: 
STEP: delete the pod
Jan  2 20:17:03.840: INFO: Waiting for pod pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005 to disappear
Jan  2 20:17:03.865: INFO: Pod pod-configmaps-d01b4e64-2d9c-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:17:03.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zv678" for this suite.
Jan  2 20:17:10.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:17:10.178: INFO: namespace: e2e-tests-secrets-zv678, resource: bindings, ignored listing per whitelist
Jan  2 20:17:10.229: INFO: namespace e2e-tests-secrets-zv678 deletion completed in 6.33899567s

• [SLOW TEST:18.624 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:17:10.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:17:20.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-n7l5q" for this suite.
Jan  2 20:18:14.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:18:14.905: INFO: namespace: e2e-tests-kubelet-test-n7l5q, resource: bindings, ignored listing per whitelist
Jan  2 20:18:15.009: INFO: namespace e2e-tests-kubelet-test-n7l5q deletion completed in 54.200653136s

• [SLOW TEST:64.780 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:18:15.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-sl28l
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 20:18:15.215: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 20:18:53.601: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-sl28l PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 20:18:53.601: INFO: >>> kubeConfig: /root/.kube/config
I0102 20:18:53.796811       8 log.go:172] (0xc0006f16b0) (0xc0023e8460) Create stream
I0102 20:18:53.797061       8 log.go:172] (0xc0006f16b0) (0xc0023e8460) Stream added, broadcasting: 1
I0102 20:18:53.859514       8 log.go:172] (0xc0006f16b0) Reply frame received for 1
I0102 20:18:53.860337       8 log.go:172] (0xc0006f16b0) (0xc001ddc000) Create stream
I0102 20:18:53.860440       8 log.go:172] (0xc0006f16b0) (0xc001ddc000) Stream added, broadcasting: 3
I0102 20:18:53.870179       8 log.go:172] (0xc0006f16b0) Reply frame received for 3
I0102 20:18:53.870541       8 log.go:172] (0xc0006f16b0) (0xc0023e8500) Create stream
I0102 20:18:53.870623       8 log.go:172] (0xc0006f16b0) (0xc0023e8500) Stream added, broadcasting: 5
I0102 20:18:53.873921       8 log.go:172] (0xc0006f16b0) Reply frame received for 5
I0102 20:18:54.289738       8 log.go:172] (0xc0006f16b0) Data frame received for 3
I0102 20:18:54.289860       8 log.go:172] (0xc001ddc000) (3) Data frame handling
I0102 20:18:54.289883       8 log.go:172] (0xc001ddc000) (3) Data frame sent
I0102 20:18:54.544112       8 log.go:172] (0xc0006f16b0) Data frame received for 1
I0102 20:18:54.544292       8 log.go:172] (0xc0006f16b0) (0xc001ddc000) Stream removed, broadcasting: 3
I0102 20:18:54.544429       8 log.go:172] (0xc0023e8460) (1) Data frame handling
I0102 20:18:54.544481       8 log.go:172] (0xc0023e8460) (1) Data frame sent
I0102 20:18:54.544786       8 log.go:172] (0xc0006f16b0) (0xc0023e8460) Stream removed, broadcasting: 1
I0102 20:18:54.544881       8 log.go:172] (0xc0006f16b0) (0xc0023e8500) Stream removed, broadcasting: 5
I0102 20:18:54.544940       8 log.go:172] (0xc0006f16b0) Go away received
I0102 20:18:54.545260       8 log.go:172] (0xc0006f16b0) (0xc0023e8460) Stream removed, broadcasting: 1
I0102 20:18:54.545307       8 log.go:172] (0xc0006f16b0) (0xc001ddc000) Stream removed, broadcasting: 3
I0102 20:18:54.545343       8 log.go:172] (0xc0006f16b0) (0xc0023e8500) Stream removed, broadcasting: 5
Jan  2 20:18:54.545: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:18:54.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-sl28l" for this suite.
Jan  2 20:19:18.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:19:18.789: INFO: namespace: e2e-tests-pod-network-test-sl28l, resource: bindings, ignored listing per whitelist
Jan  2 20:19:18.868: INFO: namespace e2e-tests-pod-network-test-sl28l deletion completed in 24.264731442s

• [SLOW TEST:63.859 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:19:18.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  2 20:19:19.072: INFO: Waiting up to 5m0s for pod "pod-27e32a2b-2d9d-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-lcd7w" to be "success or failure"
Jan  2 20:19:19.081: INFO: Pod "pod-27e32a2b-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.863815ms
Jan  2 20:19:21.163: INFO: Pod "pod-27e32a2b-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090731555s
Jan  2 20:19:23.183: INFO: Pod "pod-27e32a2b-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110734673s
Jan  2 20:19:25.890: INFO: Pod "pod-27e32a2b-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.817645266s
Jan  2 20:19:27.907: INFO: Pod "pod-27e32a2b-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.835138089s
Jan  2 20:19:30.047: INFO: Pod "pod-27e32a2b-2d9d-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.975284057s
STEP: Saw pod success
Jan  2 20:19:30.048: INFO: Pod "pod-27e32a2b-2d9d-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:19:30.091: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-27e32a2b-2d9d-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 20:19:30.287: INFO: Waiting for pod pod-27e32a2b-2d9d-11ea-814c-0242ac110005 to disappear
Jan  2 20:19:30.442: INFO: Pod pod-27e32a2b-2d9d-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:19:30.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lcd7w" for this suite.
Jan  2 20:19:38.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:19:38.686: INFO: namespace: e2e-tests-emptydir-lcd7w, resource: bindings, ignored listing per whitelist
Jan  2 20:19:38.709: INFO: namespace e2e-tests-emptydir-lcd7w deletion completed in 8.234212003s

• [SLOW TEST:19.840 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:19:38.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  2 20:19:39.072: INFO: Waiting up to 5m0s for pod "pod-33c69065-2d9d-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-fwlfk" to be "success or failure"
Jan  2 20:19:39.082: INFO: Pod "pod-33c69065-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.500082ms
Jan  2 20:19:41.120: INFO: Pod "pod-33c69065-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048298329s
Jan  2 20:19:43.135: INFO: Pod "pod-33c69065-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063054208s
Jan  2 20:19:45.500: INFO: Pod "pod-33c69065-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427834361s
Jan  2 20:19:47.519: INFO: Pod "pod-33c69065-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447645795s
Jan  2 20:19:49.535: INFO: Pod "pod-33c69065-2d9d-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.463396742s
STEP: Saw pod success
Jan  2 20:19:49.535: INFO: Pod "pod-33c69065-2d9d-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:19:49.548: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-33c69065-2d9d-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 20:19:50.834: INFO: Waiting for pod pod-33c69065-2d9d-11ea-814c-0242ac110005 to disappear
Jan  2 20:19:50.851: INFO: Pod pod-33c69065-2d9d-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:19:50.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fwlfk" for this suite.
Jan  2 20:19:57.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:19:57.165: INFO: namespace: e2e-tests-emptydir-fwlfk, resource: bindings, ignored listing per whitelist
Jan  2 20:19:57.240: INFO: namespace e2e-tests-emptydir-fwlfk deletion completed in 6.226037325s

• [SLOW TEST:18.531 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
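The emptyDir test above mounts a tmpfs-backed volume, writes a file with mode 0644, and verifies the permission bits stick. Roughly the same check can be sketched locally with Python's standard library (the directory here is an ordinary temp dir standing in for the emptyDir mount; `write_with_mode` is an illustrative helper, not part of the e2e framework):

```python
import os
import stat
import tempfile

def write_with_mode(dir_path: str, name: str, data: bytes, mode: int) -> str:
    """Create a file and force its permission bits, similar to what the
    e2e mount-tester container does for the (0644, tmpfs) case."""
    path = os.path.join(dir_path, name)
    with open(path, "wb") as f:
        f.write(data)
    os.chmod(path, mode)  # set the exact bits regardless of umask
    return path

with tempfile.TemporaryDirectory() as d:  # stand-in for the emptyDir mount
    p = write_with_mode(d, "test-file", b"mount-tester new file\n", 0o644)
    perm = stat.S_IMODE(os.stat(p).st_mode)
    print(oct(perm))  # 0o644
```

On a real cluster the assertion additionally covers the tmpfs medium (`emptyDir: {medium: Memory}`), which this local sketch cannot reproduce.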
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:19:57.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:20:07.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-khwmc" for this suite.
Jan  2 20:20:51.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:20:51.701: INFO: namespace: e2e-tests-kubelet-test-khwmc, resource: bindings, ignored listing per whitelist
Jan  2 20:20:51.802: INFO: namespace e2e-tests-kubelet-test-khwmc deletion completed in 44.170000896s

• [SLOW TEST:54.560 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:20:51.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 20:20:52.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-m95r7" to be "success or failure"
Jan  2 20:20:52.101: INFO: Pod "downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 71.880078ms
Jan  2 20:20:54.378: INFO: Pod "downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349155659s
Jan  2 20:20:56.397: INFO: Pod "downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367735329s
Jan  2 20:20:58.634: INFO: Pod "downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.604970543s
Jan  2 20:21:01.379: INFO: Pod "downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.349303887s
Jan  2 20:21:03.397: INFO: Pod "downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.368195486s
STEP: Saw pod success
Jan  2 20:21:03.398: INFO: Pod "downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:21:03.406: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 20:21:04.053: INFO: Waiting for pod downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005 to disappear
Jan  2 20:21:04.074: INFO: Pod downwardapi-volume-5f4a1fce-2d9d-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:21:04.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-m95r7" for this suite.
Jan  2 20:21:10.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:21:10.387: INFO: namespace: e2e-tests-downward-api-m95r7, resource: bindings, ignored listing per whitelist
Jan  2 20:21:10.581: INFO: namespace e2e-tests-downward-api-m95r7 deletion completed in 6.463211007s

• [SLOW TEST:18.779 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
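Each of these pod tests shows the same pattern: the framework waits up to 5m0s, polling the pod's phase every couple of seconds until it reaches a terminal phase (`Succeeded` or `Failed`). A simplified sketch of that wait loop, with injectable clock and sleep so the timing is testable (the function name and parameters are illustrative, not the framework's actual API):

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300.0, poll_s=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll until the pod reaches Succeeded or Failed, mirroring the
    log's 5m0s "success or failure" wait (simplified sketch)."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll_s)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulated phase sequence like the log above: several Pendings, then Succeeded.
phases = iter(["Pending"] * 5 + ["Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None)
print(result)  # Succeeded
```

The real framework also records the elapsed time per poll (the `Elapsed:` values in the log) and fetches container logs once the pod succeeds.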
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:21:10.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  2 20:21:11.000: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:21:26.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-cnkbj" for this suite.
Jan  2 20:21:33.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:21:33.184: INFO: namespace: e2e-tests-init-container-cnkbj, resource: bindings, ignored listing per whitelist
Jan  2 20:21:33.232: INFO: namespace e2e-tests-init-container-cnkbj deletion completed in 6.241903599s

• [SLOW TEST:22.651 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:21:33.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 20:21:33.419: INFO: Creating deployment "nginx-deployment"
Jan  2 20:21:33.428: INFO: Waiting for observed generation 1
Jan  2 20:21:36.498: INFO: Waiting for all required pods to come up
Jan  2 20:21:36.558: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  2 20:22:17.105: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  2 20:22:17.132: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  2 20:22:17.160: INFO: Updating deployment nginx-deployment
Jan  2 20:22:17.160: INFO: Waiting for observed generation 2
Jan  2 20:22:20.480: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  2 20:22:20.499: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  2 20:22:20.516: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  2 20:22:21.886: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  2 20:22:21.886: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  2 20:22:21.905: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  2 20:22:22.277: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  2 20:22:22.277: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  2 20:22:22.646: INFO: Updating deployment nginx-deployment
Jan  2 20:22:22.646: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  2 20:22:22.705: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  2 20:22:24.955: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
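The replica counts being verified here (20 and 13) come from proportional scaling: the deployment is scaled from 10 to 30 while a rollout is stuck, so the extra replicas are split across the two ReplicaSets in proportion to their current sizes, within the surge budget (30 + maxSurge 3 = 33 total). A simplified sketch of that arithmetic, not the exact upstream controller algorithm, reproduces the log's numbers from the pre-scale sizes (8 and 5, totalling 10 + maxSurge 3 = 13):

```python
def proportional_scale(rs_sizes, old_total_with_surge, new_total_with_surge):
    """Split a deployment scale-up across its ReplicaSets in proportion to
    their current sizes; leftovers go to the sets with the largest
    fractional share. Illustrative sketch, not the upstream code."""
    to_add = new_total_with_surge - old_total_with_surge
    current = sum(rs_sizes)
    # integer share for each ReplicaSet
    shares = [n * to_add // current for n in rs_sizes]
    leftover = to_add - sum(shares)
    # hand out the remainder by largest fractional part
    by_frac = sorted(range(len(rs_sizes)),
                     key=lambda i: (rs_sizes[i] * to_add) % current,
                     reverse=True)
    for i in by_frac[:leftover]:
        shares[i] += 1
    return [n + s for n, s in zip(rs_sizes, shares)]

# The log's case: scale 10 -> 30 with maxSurge=3, ReplicaSets at 8 and 5.
print(proportional_scale([8, 5], 13, 33))  # [20, 13]
```

The result matches the `.spec.replicas = 20` and `.spec.replicas = 13` checks above: the old (healthy) ReplicaSet absorbs most of the scale-up, while the stuck `nginx:404` rollout keeps its proportional slice.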
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 20:22:27.299: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-fsltl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fsltl/deployments/nginx-deployment,UID:77fa2f19-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958786,Generation:3,CreationTimestamp:2020-01-02 20:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-02 20:22:18 +0000 UTC 2020-01-02 20:21:33 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-02 20:22:23 +0000 UTC 2020-01-02 20:22:23 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  2 20:22:27.352: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-fsltl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fsltl/replicasets/nginx-deployment-5c98f8fb5,UID:920efd52-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958793,Generation:3,CreationTimestamp:2020-01-02 20:22:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 77fa2f19-2d9d-11ea-a994-fa163e34d433 0xc00253d587 0xc00253d588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 20:22:27.352: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  2 20:22:27.353: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-fsltl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fsltl/replicasets/nginx-deployment-85ddf47c5d,UID:77fd5df0-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958784,Generation:3,CreationTimestamp:2020-01-02 20:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 77fa2f19-2d9d-11ea-a994-fa163e34d433 0xc00253d647 0xc00253d648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  2 20:22:27.944: INFO: Pod "nginx-deployment-5c98f8fb5-2bx2t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2bx2t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-2bx2t,UID:927ca290-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958715,Generation:0,CreationTimestamp:2020-01-02 20:22:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc00253dfc7 0xc00253dfc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002648030} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002648050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 20:22:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.944: INFO: Pod "nginx-deployment-5c98f8fb5-2qtds" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2qtds,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-2qtds,UID:9225264a-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958714,Generation:0,CreationTimestamp:2020-01-02 20:22:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648117 0xc002648118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002648180} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026481a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 20:22:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.945: INFO: Pod "nginx-deployment-5c98f8fb5-5khld" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5khld,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-5khld,UID:968cf8af-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958775,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648267 0xc002648268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026482d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026482f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.945: INFO: Pod "nginx-deployment-5c98f8fb5-7zlpc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7zlpc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-7zlpc,UID:968cf813-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958772,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648367 0xc002648368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026483d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026483f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.946: INFO: Pod "nginx-deployment-5c98f8fb5-dbdjt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dbdjt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-dbdjt,UID:967478fb-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958764,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648467 0xc002648468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026484d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026484f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.946: INFO: Pod "nginx-deployment-5c98f8fb5-f9gjs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f9gjs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-f9gjs,UID:96737c08-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958761,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648567 0xc002648568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026485d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026485f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.946: INFO: Pod "nginx-deployment-5c98f8fb5-fqc8f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fqc8f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-fqc8f,UID:9224b89b-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958712,Generation:0,CreationTimestamp:2020-01-02 20:22:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648667 0xc002648668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026486d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026486f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 20:22:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.947: INFO: Pod "nginx-deployment-5c98f8fb5-frwbq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-frwbq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-frwbq,UID:9216b7e6-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958710,Generation:0,CreationTimestamp:2020-01-02 20:22:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc0026487b7 0xc0026487b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002648820} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002648840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 20:22:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.947: INFO: Pod "nginx-deployment-5c98f8fb5-nfr62" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nfr62,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-nfr62,UID:9296d82d-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958718,Generation:0,CreationTimestamp:2020-01-02 20:22:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648907 0xc002648908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002648970} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002648990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 20:22:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.948: INFO: Pod "nginx-deployment-5c98f8fb5-nnrcb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nnrcb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-nnrcb,UID:96434274-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958742,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648a57 0xc002648a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002648ac0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002648ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.948: INFO: Pod "nginx-deployment-5c98f8fb5-rsgnl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rsgnl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-rsgnl,UID:968ce716-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958773,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648b57 0xc002648b58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002648bc0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002648be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.949: INFO: Pod "nginx-deployment-5c98f8fb5-s4chq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s4chq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-s4chq,UID:968d02fa-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958774,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648c57 0xc002648c58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002648cc0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002648ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.949: INFO: Pod "nginx-deployment-5c98f8fb5-s5gq8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s5gq8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-5c98f8fb5-s5gq8,UID:9697f81e-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958780,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 920efd52-2d9d-11ea-a994-fa163e34d433 0xc002648d57 0xc002648d58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002648dc0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002648de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.950: INFO: Pod "nginx-deployment-85ddf47c5d-2gvwf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2gvwf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-2gvwf,UID:78368264-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958633,Generation:0,CreationTimestamp:2020-01-02 20:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002648e57 0xc002648e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002648ec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002648ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-02 20:21:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 20:22:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ada6e08017eb133c202a3518eee85295c4acd1431c32d92a8f8e3f4cfadd653d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.950: INFO: Pod "nginx-deployment-85ddf47c5d-8tsnh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8tsnh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-8tsnh,UID:966c0be7-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958755,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002648fa7 0xc002648fa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002649010} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002649030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.950: INFO: Pod "nginx-deployment-85ddf47c5d-9crms" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9crms,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-9crms,UID:96381597-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958738,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc0026490a7 0xc0026490a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002649110} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002649130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.950: INFO: Pod "nginx-deployment-85ddf47c5d-hh2lv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hh2lv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-hh2lv,UID:781dc0e3-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958654,Generation:0,CreationTimestamp:2020-01-02 20:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc0026491a7 0xc0026491a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002649210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002649230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-02 20:21:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 20:22:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b88877f77792c333b3901e698162a08939559a6390b3f6b5dd6e637ebb0d1e50}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.951: INFO: Pod "nginx-deployment-85ddf47c5d-j6mpq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j6mpq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-j6mpq,UID:781549da-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958641,Generation:0,CreationTimestamp:2020-01-02 20:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc0026492f7 0xc0026492f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002649360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002649380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-02 20:21:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 20:22:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9d4ebba58c5cf08703d147fa0f8bf9c764ff27f470a92c4c7b611a608d8bc40e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.951: INFO: Pod "nginx-deployment-85ddf47c5d-k4gb2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-k4gb2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-k4gb2,UID:966aa47b-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958752,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649447 0xc002649448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026494b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026494d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.951: INFO: Pod "nginx-deployment-85ddf47c5d-lfn8n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lfn8n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-lfn8n,UID:9674547b-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958765,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649547 0xc002649548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026495b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026495d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.952: INFO: Pod "nginx-deployment-85ddf47c5d-mnlp6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mnlp6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-mnlp6,UID:96744e53-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958758,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649647 0xc002649648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026496b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026496d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.952: INFO: Pod "nginx-deployment-85ddf47c5d-nnkwc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nnkwc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-nnkwc,UID:966c4034-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958747,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649747 0xc002649748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026497b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026497d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.952: INFO: Pod "nginx-deployment-85ddf47c5d-pn4zx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pn4zx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-pn4zx,UID:78377cb3-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958647,Generation:0,CreationTimestamp:2020-01-02 20:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649847 0xc002649848}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026498b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026498d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-02 20:21:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 20:22:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://83a0540db0d4d4646ab99dd9c0cf73403f325b3f4bfb18355b9db2e8333654ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.952: INFO: Pod "nginx-deployment-85ddf47c5d-qbcd5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qbcd5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-qbcd5,UID:9674bb81-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958762,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649997 0xc002649998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002649a00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002649a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.953: INFO: Pod "nginx-deployment-85ddf47c5d-qn47q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qn47q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-qn47q,UID:966c6940-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958756,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649a97 0xc002649a98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002649b00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002649b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.953: INFO: Pod "nginx-deployment-85ddf47c5d-s4jv5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s4jv5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-s4jv5,UID:781e1369-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958619,Generation:0,CreationTimestamp:2020-01-02 20:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649b97 0xc002649b98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002649c00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002649c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-02 20:21:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 20:22:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f18874843e53a59a509d663ea8852301fed4c518d9f369b79991711e71a53dc8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.954: INFO: Pod "nginx-deployment-85ddf47c5d-sbv9v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sbv9v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-sbv9v,UID:781f04dc-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958627,Generation:0,CreationTimestamp:2020-01-02 20:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649ce7 0xc002649ce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002649d50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002649d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-02 20:21:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 20:22:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9d54ea31c4e4d57d758dffe0a6827f45f3e7e4b997fa523a48e5fd6919902b91}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.954: INFO: Pod "nginx-deployment-85ddf47c5d-sll7v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sll7v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-sll7v,UID:95f345a9-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958791,Generation:0,CreationTimestamp:2020-01-02 20:22:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649e37 0xc002649e38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002649ea0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002649ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 20:22:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.954: INFO: Pod "nginx-deployment-85ddf47c5d-tv2gm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tv2gm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-tv2gm,UID:78103348-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958638,Generation:0,CreationTimestamp:2020-01-02 20:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc002649f77 0xc002649f78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002649fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00265e000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-02 20:21:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 20:22:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6ed5fdb91fa3eb41338cc47af32e8ab5f6bfedaa7f022474ef6d33d5354aaba7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.955: INFO: Pod "nginx-deployment-85ddf47c5d-vf7rm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vf7rm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-vf7rm,UID:78363310-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958650,Generation:0,CreationTimestamp:2020-01-02 20:21:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc00265e0c7 0xc00265e0c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00265e130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00265e150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:21:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-02 20:21:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 20:22:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://62b044bdbb2f9502e8872bb35f4b3ce5092a38e0ae997a8159b1a984cd608434}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.955: INFO: Pod "nginx-deployment-85ddf47c5d-wx5sr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wx5sr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-wx5sr,UID:963792c3-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958798,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc00265e217 0xc00265e218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00265e280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00265e2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 20:22:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.955: INFO: Pod "nginx-deployment-85ddf47c5d-z9n5d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z9n5d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-z9n5d,UID:967534d9-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958767,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc00265e357 0xc00265e358}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00265e3c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00265e3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 20:22:27.955: INFO: Pod "nginx-deployment-85ddf47c5d-ztrzz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ztrzz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fsltl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsltl/pods/nginx-deployment-85ddf47c5d-ztrzz,UID:9674a37c-2d9d-11ea-a994-fa163e34d433,ResourceVersion:16958757,Generation:0,CreationTimestamp:2020-01-02 20:22:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 77fd5df0-2d9d-11ea-a994-fa163e34d433 0xc00265e457 0xc00265e458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jx224 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jx224,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jx224 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00265e4c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00265e4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:22:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:22:27.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-fsltl" for this suite.
Jan  2 20:23:58.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:23:58.819: INFO: namespace: e2e-tests-deployment-fsltl, resource: bindings, ignored listing per whitelist
Jan  2 20:23:58.847: INFO: namespace e2e-tests-deployment-fsltl deletion completed in 1m30.238994474s

• [SLOW TEST:145.614 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:23:58.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 20:24:40.417: INFO: Waiting up to 5m0s for pod "client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005" in namespace "e2e-tests-pods-8tdgs" to be "success or failure"
Jan  2 20:24:40.589: INFO: Pod "client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 171.847674ms
Jan  2 20:24:42.633: INFO: Pod "client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216561441s
Jan  2 20:24:44.640: INFO: Pod "client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222971488s
Jan  2 20:24:47.040: INFO: Pod "client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623364695s
Jan  2 20:24:49.051: INFO: Pod "client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.633783266s
Jan  2 20:24:51.477: INFO: Pod "client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.060651202s
STEP: Saw pod success
Jan  2 20:24:51.478: INFO: Pod "client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:24:51.488: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005 container env3cont: 
STEP: delete the pod
Jan  2 20:24:52.069: INFO: Waiting for pod client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005 to disappear
Jan  2 20:24:52.106: INFO: Pod client-envvars-e7678fde-2d9d-11ea-814c-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:24:52.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8tdgs" for this suite.
Jan  2 20:25:34.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:25:34.414: INFO: namespace: e2e-tests-pods-8tdgs, resource: bindings, ignored listing per whitelist
Jan  2 20:25:34.555: INFO: namespace e2e-tests-pods-8tdgs deletion completed in 42.430973s

• [SLOW TEST:95.708 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:25:34.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 20:25:34.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-4hgp7" to be "success or failure"
Jan  2 20:25:34.987: INFO: Pod "downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.694039ms
Jan  2 20:25:37.003: INFO: Pod "downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080345097s
Jan  2 20:25:39.026: INFO: Pod "downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103232129s
Jan  2 20:25:42.172: INFO: Pod "downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.249646483s
Jan  2 20:25:44.298: INFO: Pod "downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.375892311s
Jan  2 20:25:46.324: INFO: Pod "downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.401742717s
Jan  2 20:25:48.357: INFO: Pod "downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.43477438s
STEP: Saw pod success
Jan  2 20:25:48.357: INFO: Pod "downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:25:48.385: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 20:25:48.573: INFO: Waiting for pod downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005 to disappear
Jan  2 20:25:48.580: INFO: Pod downwardapi-volume-07e90cef-2d9e-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:25:48.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4hgp7" for this suite.
Jan  2 20:25:56.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:25:56.906: INFO: namespace: e2e-tests-projected-4hgp7, resource: bindings, ignored listing per whitelist
Jan  2 20:25:57.067: INFO: namespace e2e-tests-projected-4hgp7 deletion completed in 8.480386295s

• [SLOW TEST:22.511 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:25:57.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:26:09.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ll64h" for this suite.
Jan  2 20:26:15.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:26:15.457: INFO: namespace: e2e-tests-kubelet-test-ll64h, resource: bindings, ignored listing per whitelist
Jan  2 20:26:15.465: INFO: namespace e2e-tests-kubelet-test-ll64h deletion completed in 6.149089809s

• [SLOW TEST:18.398 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:26:15.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan  2 20:26:15.683: INFO: Waiting up to 5m0s for pod "var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005" in namespace "e2e-tests-var-expansion-p4l7w" to be "success or failure"
Jan  2 20:26:15.776: INFO: Pod "var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 93.346954ms
Jan  2 20:26:17.816: INFO: Pod "var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133805654s
Jan  2 20:26:19.855: INFO: Pod "var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171957516s
Jan  2 20:26:21.876: INFO: Pod "var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19338243s
Jan  2 20:26:24.056: INFO: Pod "var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.373467943s
Jan  2 20:26:26.112: INFO: Pod "var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.429120177s
STEP: Saw pod success
Jan  2 20:26:26.112: INFO: Pod "var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:26:26.119: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 20:26:26.650: INFO: Waiting for pod var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005 to disappear
Jan  2 20:26:26.670: INFO: Pod var-expansion-2035cf7b-2d9e-11ea-814c-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:26:26.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-p4l7w" for this suite.
Jan  2 20:26:32.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:26:32.830: INFO: namespace: e2e-tests-var-expansion-p4l7w, resource: bindings, ignored listing per whitelist
Jan  2 20:26:32.904: INFO: namespace e2e-tests-var-expansion-p4l7w deletion completed in 6.224904782s

• [SLOW TEST:17.439 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:26:32.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 20:26:33.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9mv7n'
Jan  2 20:26:35.295: INFO: stderr: ""
Jan  2 20:26:35.295: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  2 20:26:45.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9mv7n -o json'
Jan  2 20:26:45.524: INFO: stderr: ""
Jan  2 20:26:45.524: INFO: stdout:
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "creationTimestamp": "2020-01-02T20:26:35Z",
        "labels": {
            "run": "e2e-test-nginx-pod"
        },
        "name": "e2e-test-nginx-pod",
        "namespace": "e2e-tests-kubectl-9mv7n",
        "resourceVersion": "16959384",
        "selfLink": "/api/v1/namespaces/e2e-tests-kubectl-9mv7n/pods/e2e-test-nginx-pod",
        "uid": "2be34347-2d9e-11ea-a994-fa163e34d433"
    },
    "spec": {
        "containers": [
            {
                "image": "docker.io/library/nginx:1.14-alpine",
                "imagePullPolicy": "IfNotPresent",
                "name": "e2e-test-nginx-pod",
                "resources": {},
                "terminationMessagePath": "/dev/termination-log",
                "terminationMessagePolicy": "File",
                "volumeMounts": [
                    {
                        "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
                        "name": "default-token-cpw7n",
                        "readOnly": true
                    }
                ]
            }
        ],
        "dnsPolicy": "ClusterFirst",
        "enableServiceLinks": true,
        "nodeName": "hunter-server-hu5at5svl7ps",
        "priority": 0,
        "restartPolicy": "Always",
        "schedulerName": "default-scheduler",
        "securityContext": {},
        "serviceAccount": "default",
        "serviceAccountName": "default",
        "terminationGracePeriodSeconds": 30,
        "tolerations": [
            {
                "effect": "NoExecute",
                "key": "node.kubernetes.io/not-ready",
                "operator": "Exists",
                "tolerationSeconds": 300
            },
            {
                "effect": "NoExecute",
                "key": "node.kubernetes.io/unreachable",
                "operator": "Exists",
                "tolerationSeconds": 300
            }
        ],
        "volumes": [
            {
                "name": "default-token-cpw7n",
                "secret": {
                    "defaultMode": 420,
                    "secretName": "default-token-cpw7n"
                }
            }
        ]
    },
    "status": {
        "conditions": [
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-01-02T20:26:35Z",
                "status": "True",
                "type": "Initialized"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-01-02T20:26:44Z",
                "status": "True",
                "type": "Ready"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-01-02T20:26:44Z",
                "status": "True",
                "type": "ContainersReady"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2020-01-02T20:26:35Z",
                "status": "True",
                "type": "PodScheduled"
            }
        ],
        "containerStatuses": [
            {
                "containerID": "docker://de0ec098e36123ac398fd82edbe45ff773c4fb9ca4470d5cfb541f8486d1f339",
                "image": "nginx:1.14-alpine",
                "imageID": "docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7",
                "lastState": {},
                "name": "e2e-test-nginx-pod",
                "ready": true,
                "restartCount": 0,
                "state": {
                    "running": {
                        "startedAt": "2020-01-02T20:26:43Z"
                    }
                }
            }
        ],
        "hostIP": "10.96.1.240",
        "phase": "Running",
        "podIP": "10.32.0.4",
        "qosClass": "BestEffort",
        "startTime": "2020-01-02T20:26:35Z"
    }
}
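The Ready and ContainersReady conditions in the pod JSON above are what the suite gates on before replacing the image. A minimal Python sketch of that readiness check, run against a trimmed, hypothetical stand-in for the full pod object (only the fields the helper reads are kept):

```python
import json

# Trimmed stand-in for the pod JSON logged above; illustrative only.
POD_JSON = """
{
  "status": {
    "phase": "Running",
    "conditions": [
      {"type": "Initialized", "status": "True"},
      {"type": "Ready", "status": "True"},
      {"type": "ContainersReady", "status": "True"},
      {"type": "PodScheduled", "status": "True"}
    ],
    "containerStatuses": [
      {"name": "e2e-test-nginx-pod", "ready": true, "restartCount": 0}
    ]
  }
}
"""

def pod_is_ready(pod: dict) -> bool:
    """A pod counts as ready when it is Running and carries a Ready
    condition whose status is "True"."""
    status = pod.get("status", {})
    if status.get("phase") != "Running":
        return False
    return any(
        c.get("type") == "Ready" and c.get("status") == "True"
        for c in status.get("conditions", [])
    )

pod = json.loads(POD_JSON)
print(pod_is_ready(pod))  # -> True
```

The same check returns False for a pod still in Pending, which is why the earlier "Waiting up to 5m0s" loops keep polling.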
STEP: replace the image in the pod
Jan  2 20:26:45.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-9mv7n'
Jan  2 20:26:46.052: INFO: stderr: ""
Jan  2 20:26:46.052: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan  2 20:26:46.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9mv7n'
Jan  2 20:26:54.887: INFO: stderr: ""
Jan  2 20:26:54.888: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:26:54.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9mv7n" for this suite.
Jan  2 20:27:01.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:27:01.048: INFO: namespace: e2e-tests-kubectl-9mv7n, resource: bindings, ignored listing per whitelist
Jan  2 20:27:01.172: INFO: namespace e2e-tests-kubectl-9mv7n deletion completed in 6.26875962s

• [SLOW TEST:28.267 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:27:01.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 20:27:01.334: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 26.983418ms)
Jan  2 20:27:01.364: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 30.015818ms)
Jan  2 20:27:01.373: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.075017ms)
Jan  2 20:27:01.380: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.032126ms)
Jan  2 20:27:01.387: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.562394ms)
Jan  2 20:27:01.394: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.315475ms)
Jan  2 20:27:01.400: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.7901ms)
Jan  2 20:27:01.405: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.206108ms)
Jan  2 20:27:01.410: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.414595ms)
Jan  2 20:27:01.413: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.740284ms)
Jan  2 20:27:01.418: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.076513ms)
Jan  2 20:27:01.423: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.6485ms)
Jan  2 20:27:01.428: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.997309ms)
Jan  2 20:27:01.433: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.86471ms)
Jan  2 20:27:01.438: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.620169ms)
Jan  2 20:27:01.442: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.309405ms)
Jan  2 20:27:01.447: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.665756ms)
Jan  2 20:27:01.451: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.159003ms)
Jan  2 20:27:01.455: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.960145ms)
Jan  2 20:27:01.459: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.799538ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:27:01.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-l29wm" for this suite.
Jan  2 20:27:07.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:27:07.701: INFO: namespace: e2e-tests-proxy-l29wm, resource: bindings, ignored listing per whitelist
Jan  2 20:27:07.729: INFO: namespace e2e-tests-proxy-l29wm deletion completed in 6.265851974s

• [SLOW TEST:6.557 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
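Each of the twenty proxy-subresource calls above logs its HTTP status and round-trip latency; the suite itself only asserts the 200s, but the samples can be summarized. A small Python sketch over the latencies copied from the log:

```python
# Latencies in milliseconds, copied from the twenty proxy calls logged above.
samples_ms = [26.983418, 30.015818, 9.075017, 7.032126, 6.562394,
              7.315475, 5.7901, 5.206108, 4.414595, 3.740284,
              5.076513, 4.6485, 4.997309, 4.86471, 4.620169,
              4.309405, 4.665756, 4.159003, 3.960145, 3.799538]

def summarize(samples):
    """Return (min, mean, max) for a list of latency samples."""
    return min(samples), sum(samples) / len(samples), max(samples)

lo, mean, hi = summarize(samples_ms)
print(f"min={lo:.3f}ms mean={mean:.3f}ms max={hi:.3f}ms")
```

The first two calls are the slowest, a typical warm-up effect (connection setup and caching) before the latencies settle around 4-5 ms.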
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:27:07.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan  2 20:27:08.047: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-25474" to be "success or failure"
Jan  2 20:27:08.155: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 107.678878ms
Jan  2 20:27:10.380: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332295974s
Jan  2 20:27:12.392: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344679275s
Jan  2 20:27:14.410: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362481719s
Jan  2 20:27:16.786: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.738986058s
Jan  2 20:27:18.808: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.760791125s
Jan  2 20:27:20.833: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.78580659s
STEP: Saw pod success
Jan  2 20:27:20.833: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  2 20:27:20.841: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  2 20:27:21.048: INFO: Waiting for pod pod-host-path-test to disappear
Jan  2 20:27:21.060: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:27:21.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-25474" for this suite.
Jan  2 20:27:27.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:27:28.012: INFO: namespace: e2e-tests-hostpath-25474, resource: bindings, ignored listing per whitelist
Jan  2 20:27:28.022: INFO: namespace e2e-tests-hostpath-25474 deletion completed in 6.950334119s

• [SLOW TEST:20.292 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
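The repeated 'Waiting up to 5m0s for pod ... to be "success or failure"' sequences above follow a plain poll-with-timeout pattern. A hedged Python sketch of that loop; the `get_phase` callable and the simulated phase sequence are illustrative, not the framework's actual API:

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0):
    """Poll get_phase() every `interval` seconds until it returns a phase
    in `want` or `timeout` seconds elapse; returns (phase, elapsed)."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in want:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Simulated phase sequence mirroring the hostPath run above: several
# Pending observations, then Succeeded.
phases = iter(["Pending"] * 6 + ["Succeeded"])
phase, _ = wait_for_pod_phase(lambda: next(phases), interval=0.0)
print(phase)  # -> Succeeded
```

With a real cluster, `get_phase` would read `status.phase` from the API server; the 2 s interval matches the roughly two-second spacing of the Pending lines in the log.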
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:27:28.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:27:41.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-42jzx" for this suite.
Jan  2 20:28:07.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:28:07.793: INFO: namespace: e2e-tests-replication-controller-42jzx, resource: bindings, ignored listing per whitelist
Jan  2 20:28:07.830: INFO: namespace e2e-tests-replication-controller-42jzx deletion completed in 26.496165762s

• [SLOW TEST:39.808 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:28:07.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:28:15.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-kjqpj" for this suite.
Jan  2 20:28:21.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:28:21.210: INFO: namespace: e2e-tests-namespaces-kjqpj, resource: bindings, ignored listing per whitelist
Jan  2 20:28:21.316: INFO: namespace e2e-tests-namespaces-kjqpj deletion completed in 6.261433432s
STEP: Destroying namespace "e2e-tests-nsdeletetest-dh55l" for this suite.
Jan  2 20:28:21.320: INFO: Namespace e2e-tests-nsdeletetest-dh55l was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-k5fhg" for this suite.
Jan  2 20:28:27.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:28:28.032: INFO: namespace: e2e-tests-nsdeletetest-k5fhg, resource: bindings, ignored listing per whitelist
Jan  2 20:28:28.253: INFO: namespace e2e-tests-nsdeletetest-k5fhg deletion completed in 6.932913725s

• [SLOW TEST:20.422 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:28:28.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-t8dc4
Jan  2 20:28:38.820: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-t8dc4
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 20:28:38.844: INFO: Initial restart count of pod liveness-http is 0
Jan  2 20:29:03.094: INFO: Restart count of pod e2e-tests-container-probe-t8dc4/liveness-http is now 1 (24.249440426s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:29:03.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-t8dc4" for this suite.
Jan  2 20:29:09.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:29:09.439: INFO: namespace: e2e-tests-container-probe-t8dc4, resource: bindings, ignored listing per whitelist
Jan  2 20:29:09.528: INFO: namespace e2e-tests-container-probe-t8dc4 deletion completed in 6.279974718s

• [SLOW TEST:41.276 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
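The probe test above reports 24.249440426s between the initial restartCount observation and the restart. Nearly the same interval can be recovered from the log's own timestamps, which are rounded to milliseconds, hence 24.250s rather than the framework's internally clocked value. A Python sketch of parsing that timestamp format (the year is assumed, since the log omits it):

```python
from datetime import datetime

def parse_ts(line: str, year: int = 2020) -> datetime:
    """Parse the 'Jan  2 20:29:03.094' prefix used by these log lines."""
    mon, day, clock = line.split()[:3]
    clock = clock.rstrip(":")  # drop the colon that separates stamp from message
    return datetime.strptime(f"{year} {mon} {day} {clock}",
                             "%Y %b %d %H:%M:%S.%f")

start = parse_ts("Jan  2 20:28:38.844: INFO: Initial restart count of pod liveness-http is 0")
end   = parse_ts("Jan  2 20:29:03.094: INFO: Restart count ... is now 1")
print((end - start).total_seconds())  # -> 24.25
```

The ~24 s gap is consistent with a /healthz probe failing its threshold and the kubelet restarting the container, which is exactly what this conformance case asserts.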
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:29:09.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-87f06f33-2d9e-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 20:29:09.720: INFO: Waiting up to 5m0s for pod "pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005" in namespace "e2e-tests-configmap-bjwmv" to be "success or failure"
Jan  2 20:29:09.744: INFO: Pod "pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.774759ms
Jan  2 20:29:11.761: INFO: Pod "pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040315471s
Jan  2 20:29:13.780: INFO: Pod "pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059749848s
Jan  2 20:29:15.867: INFO: Pod "pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146779554s
Jan  2 20:29:17.906: INFO: Pod "pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18547927s
Jan  2 20:29:19.921: INFO: Pod "pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.200952042s
STEP: Saw pod success
Jan  2 20:29:19.922: INFO: Pod "pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:29:19.927: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 20:29:20.102: INFO: Waiting for pod pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005 to disappear
Jan  2 20:29:20.115: INFO: Pod pod-configmaps-87f26d56-2d9e-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:29:20.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bjwmv" for this suite.
Jan  2 20:29:26.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:29:26.239: INFO: namespace: e2e-tests-configmap-bjwmv, resource: bindings, ignored listing per whitelist
Jan  2 20:29:26.372: INFO: namespace e2e-tests-configmap-bjwmv deletion completed in 6.243148851s

• [SLOW TEST:16.843 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:29:26.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  2 20:29:39.361: INFO: Successfully updated pod "pod-update-activedeadlineseconds-921a07dd-2d9e-11ea-814c-0242ac110005"
Jan  2 20:29:39.362: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-921a07dd-2d9e-11ea-814c-0242ac110005" in namespace "e2e-tests-pods-nsb4v" to be "terminated due to deadline exceeded"
Jan  2 20:29:39.432: INFO: Pod "pod-update-activedeadlineseconds-921a07dd-2d9e-11ea-814c-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 69.860554ms
Jan  2 20:29:41.456: INFO: Pod "pod-update-activedeadlineseconds-921a07dd-2d9e-11ea-814c-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.094301025s
Jan  2 20:29:41.456: INFO: Pod "pod-update-activedeadlineseconds-921a07dd-2d9e-11ea-814c-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:29:41.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nsb4v" for this suite.
Jan  2 20:29:47.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:29:47.689: INFO: namespace: e2e-tests-pods-nsb4v, resource: bindings, ignored listing per whitelist
Jan  2 20:29:47.729: INFO: namespace e2e-tests-pods-nsb4v deletion completed in 6.262479294s

• [SLOW TEST:21.357 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:29:47.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0102 20:29:51.346148       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 20:29:51.346: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:29:51.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cqr9s" for this suite.
Jan  2 20:29:57.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:29:57.845: INFO: namespace: e2e-tests-gc-cqr9s, resource: bindings, ignored listing per whitelist
Jan  2 20:29:57.929: INFO: namespace e2e-tests-gc-cqr9s deletion completed in 6.213292136s

• [SLOW TEST:10.198 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:29:57.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a4cd3c04-2d9e-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 20:29:58.144: INFO: Waiting up to 5m0s for pod "pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005" in namespace "e2e-tests-configmap-n4n66" to be "success or failure"
Jan  2 20:29:58.163: INFO: Pod "pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.293023ms
Jan  2 20:30:00.180: INFO: Pod "pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036431351s
Jan  2 20:30:02.238: INFO: Pod "pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094625444s
Jan  2 20:30:04.258: INFO: Pod "pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114665198s
Jan  2 20:30:06.406: INFO: Pod "pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262556617s
Jan  2 20:30:08.438: INFO: Pod "pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.294332979s
Jan  2 20:30:10.454: INFO: Pod "pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.309954262s
STEP: Saw pod success
Jan  2 20:30:10.454: INFO: Pod "pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:30:10.464: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 20:30:10.764: INFO: Waiting for pod pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005 to disappear
Jan  2 20:30:10.783: INFO: Pod pod-configmaps-a4ce7d5c-2d9e-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:30:10.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-n4n66" for this suite.
Jan  2 20:30:16.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:30:17.088: INFO: namespace: e2e-tests-configmap-n4n66, resource: bindings, ignored listing per whitelist
Jan  2 20:30:17.093: INFO: namespace e2e-tests-configmap-n4n66 deletion completed in 6.263094134s

• [SLOW TEST:19.165 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:30:17.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:30:27.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-qt6f4" for this suite.
Jan  2 20:30:33.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:30:34.010: INFO: namespace: e2e-tests-emptydir-wrapper-qt6f4, resource: bindings, ignored listing per whitelist
Jan  2 20:30:34.144: INFO: namespace e2e-tests-emptydir-wrapper-qt6f4 deletion completed in 6.422057301s

• [SLOW TEST:17.050 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:30:34.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  2 20:30:34.396: INFO: Waiting up to 5m0s for pod "pod-ba6aab05-2d9e-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-m967h" to be "success or failure"
Jan  2 20:30:34.486: INFO: Pod "pod-ba6aab05-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 89.367966ms
Jan  2 20:30:36.504: INFO: Pod "pod-ba6aab05-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107591422s
Jan  2 20:30:38.537: INFO: Pod "pod-ba6aab05-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141029167s
Jan  2 20:30:40.658: INFO: Pod "pod-ba6aab05-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.261463241s
Jan  2 20:30:42.688: INFO: Pod "pod-ba6aab05-2d9e-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.291646659s
Jan  2 20:30:45.290: INFO: Pod "pod-ba6aab05-2d9e-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.893106858s
STEP: Saw pod success
Jan  2 20:30:45.290: INFO: Pod "pod-ba6aab05-2d9e-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:30:45.312: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ba6aab05-2d9e-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 20:30:45.749: INFO: Waiting for pod pod-ba6aab05-2d9e-11ea-814c-0242ac110005 to disappear
Jan  2 20:30:45.758: INFO: Pod pod-ba6aab05-2d9e-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:30:45.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-m967h" for this suite.
Jan  2 20:30:51.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:30:51.932: INFO: namespace: e2e-tests-emptydir-m967h, resource: bindings, ignored listing per whitelist
Jan  2 20:30:52.083: INFO: namespace e2e-tests-emptydir-m967h deletion completed in 6.318058977s

• [SLOW TEST:17.939 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:30:52.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  2 20:30:52.263: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  2 20:30:52.355: INFO: Waiting for terminating namespaces to be deleted...
Jan  2 20:30:52.359: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  2 20:30:52.374: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 20:30:52.374: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 20:30:52.374: INFO: 	Container coredns ready: true, restart count 0
Jan  2 20:30:52.374: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  2 20:30:52.374: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  2 20:30:52.374: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 20:30:52.374: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  2 20:30:52.374: INFO: 	Container weave ready: true, restart count 0
Jan  2 20:30:52.374: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 20:30:52.374: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 20:30:52.374: INFO: 	Container coredns ready: true, restart count 0
Jan  2 20:30:52.374: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 20:30:52.374: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-cb3c9708-2d9e-11ea-814c-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-cb3c9708-2d9e-11ea-814c-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-cb3c9708-2d9e-11ea-814c-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:31:16.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-sfmg2" for this suite.
Jan  2 20:31:32.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:31:33.073: INFO: namespace: e2e-tests-sched-pred-sfmg2, resource: bindings, ignored listing per whitelist
Jan  2 20:31:33.228: INFO: namespace e2e-tests-sched-pred-sfmg2 deletion completed in 16.297023663s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:41.144 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:31:33.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-72kl2
I0102 20:31:33.447475       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-72kl2, replica count: 1
I0102 20:31:34.498531       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:31:35.499064       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:31:36.499711       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:31:37.500546       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:31:38.501326       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:31:39.502086       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:31:40.502768       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:31:41.503380       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:31:42.504025       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:31:43.504582       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  2 20:31:43.718: INFO: Created: latency-svc-8bx7s
Jan  2 20:31:43.888: INFO: Got endpoints: latency-svc-8bx7s [283.273891ms]
Jan  2 20:31:44.088: INFO: Created: latency-svc-xqj57
Jan  2 20:31:44.107: INFO: Got endpoints: latency-svc-xqj57 [216.868903ms]
Jan  2 20:31:44.305: INFO: Created: latency-svc-xhwg7
Jan  2 20:31:44.318: INFO: Got endpoints: latency-svc-xhwg7 [429.621444ms]
Jan  2 20:31:44.406: INFO: Created: latency-svc-zvbvq
Jan  2 20:31:44.406: INFO: Got endpoints: latency-svc-zvbvq [517.69045ms]
Jan  2 20:31:44.599: INFO: Created: latency-svc-vz294
Jan  2 20:31:44.646: INFO: Got endpoints: latency-svc-vz294 [756.377363ms]
Jan  2 20:31:44.757: INFO: Created: latency-svc-stvn6
Jan  2 20:31:44.764: INFO: Got endpoints: latency-svc-stvn6 [874.247564ms]
Jan  2 20:31:45.029: INFO: Created: latency-svc-k8mlb
Jan  2 20:31:45.059: INFO: Got endpoints: latency-svc-k8mlb [1.169021653s]
Jan  2 20:31:45.222: INFO: Created: latency-svc-p2bdg
Jan  2 20:31:45.235: INFO: Got endpoints: latency-svc-p2bdg [1.344250836s]
Jan  2 20:31:45.510: INFO: Created: latency-svc-9qlbn
Jan  2 20:31:45.528: INFO: Got endpoints: latency-svc-9qlbn [1.637275159s]
Jan  2 20:31:45.683: INFO: Created: latency-svc-tjztb
Jan  2 20:31:45.712: INFO: Got endpoints: latency-svc-tjztb [1.821455304s]
Jan  2 20:31:45.905: INFO: Created: latency-svc-v6q82
Jan  2 20:31:45.967: INFO: Created: latency-svc-955g7
Jan  2 20:31:45.979: INFO: Got endpoints: latency-svc-v6q82 [2.088333657s]
Jan  2 20:31:45.989: INFO: Got endpoints: latency-svc-955g7 [2.099134538s]
Jan  2 20:31:46.113: INFO: Created: latency-svc-gb2hg
Jan  2 20:31:46.122: INFO: Got endpoints: latency-svc-gb2hg [2.230726832s]
Jan  2 20:31:46.401: INFO: Created: latency-svc-mr4jz
Jan  2 20:31:46.412: INFO: Got endpoints: latency-svc-mr4jz [2.520371373s]
Jan  2 20:31:46.631: INFO: Created: latency-svc-c2q9q
Jan  2 20:31:46.654: INFO: Got endpoints: latency-svc-c2q9q [2.763279685s]
Jan  2 20:31:46.901: INFO: Created: latency-svc-7d5jr
Jan  2 20:31:46.919: INFO: Got endpoints: latency-svc-7d5jr [3.027951598s]
Jan  2 20:31:46.994: INFO: Created: latency-svc-bsmkw
Jan  2 20:31:47.150: INFO: Got endpoints: latency-svc-bsmkw [3.042801973s]
Jan  2 20:31:47.198: INFO: Created: latency-svc-58zwv
Jan  2 20:31:47.212: INFO: Got endpoints: latency-svc-58zwv [2.893615752s]
Jan  2 20:31:47.392: INFO: Created: latency-svc-bflsb
Jan  2 20:31:47.409: INFO: Got endpoints: latency-svc-bflsb [3.003116167s]
Jan  2 20:31:47.581: INFO: Created: latency-svc-nncmg
Jan  2 20:31:47.614: INFO: Got endpoints: latency-svc-nncmg [2.967541752s]
Jan  2 20:31:48.490: INFO: Created: latency-svc-g8gjs
Jan  2 20:31:48.677: INFO: Got endpoints: latency-svc-g8gjs [3.913152317s]
Jan  2 20:31:48.730: INFO: Created: latency-svc-n2lgg
Jan  2 20:31:48.737: INFO: Got endpoints: latency-svc-n2lgg [3.677139205s]
Jan  2 20:31:48.891: INFO: Created: latency-svc-4qmnw
Jan  2 20:31:48.921: INFO: Got endpoints: latency-svc-4qmnw [3.685980505s]
Jan  2 20:31:49.085: INFO: Created: latency-svc-psc7x
Jan  2 20:31:49.165: INFO: Got endpoints: latency-svc-psc7x [3.636150593s]
Jan  2 20:31:49.184: INFO: Created: latency-svc-z8rzw
Jan  2 20:31:49.293: INFO: Got endpoints: latency-svc-z8rzw [3.580795286s]
Jan  2 20:31:49.312: INFO: Created: latency-svc-m9m9s
Jan  2 20:31:49.339: INFO: Got endpoints: latency-svc-m9m9s [3.359014611s]
Jan  2 20:31:49.564: INFO: Created: latency-svc-9l55p
Jan  2 20:31:49.565: INFO: Got endpoints: latency-svc-9l55p [3.575436965s]
Jan  2 20:31:49.598: INFO: Created: latency-svc-kpsmj
Jan  2 20:31:49.604: INFO: Got endpoints: latency-svc-kpsmj [3.482168745s]
Jan  2 20:31:49.755: INFO: Created: latency-svc-qqsh6
Jan  2 20:31:49.791: INFO: Got endpoints: latency-svc-qqsh6 [3.379136535s]
Jan  2 20:31:49.857: INFO: Created: latency-svc-qwlz4
Jan  2 20:31:49.951: INFO: Got endpoints: latency-svc-qwlz4 [3.296089366s]
Jan  2 20:31:49.980: INFO: Created: latency-svc-lsxld
Jan  2 20:31:50.013: INFO: Got endpoints: latency-svc-lsxld [3.094169823s]
Jan  2 20:31:50.181: INFO: Created: latency-svc-g9rqn
Jan  2 20:31:50.210: INFO: Got endpoints: latency-svc-g9rqn [3.059543585s]
Jan  2 20:31:50.395: INFO: Created: latency-svc-tv9wz
Jan  2 20:31:50.415: INFO: Got endpoints: latency-svc-tv9wz [3.203064973s]
Jan  2 20:31:50.761: INFO: Created: latency-svc-lmstn
Jan  2 20:31:50.861: INFO: Got endpoints: latency-svc-lmstn [3.451533864s]
Jan  2 20:31:50.898: INFO: Created: latency-svc-t5nns
Jan  2 20:31:50.952: INFO: Got endpoints: latency-svc-t5nns [3.337788398s]
Jan  2 20:31:51.097: INFO: Created: latency-svc-gnccj
Jan  2 20:31:51.127: INFO: Got endpoints: latency-svc-gnccj [2.449248419s]
Jan  2 20:31:51.197: INFO: Created: latency-svc-qljx7
Jan  2 20:31:51.443: INFO: Got endpoints: latency-svc-qljx7 [2.705995084s]
Jan  2 20:31:51.480: INFO: Created: latency-svc-ptd6j
Jan  2 20:31:51.631: INFO: Got endpoints: latency-svc-ptd6j [2.709800196s]
Jan  2 20:31:51.650: INFO: Created: latency-svc-wxx8j
Jan  2 20:31:51.659: INFO: Got endpoints: latency-svc-wxx8j [2.493985027s]
Jan  2 20:31:51.901: INFO: Created: latency-svc-7zztg
Jan  2 20:31:51.917: INFO: Got endpoints: latency-svc-7zztg [2.623353241s]
Jan  2 20:31:51.966: INFO: Created: latency-svc-q4lq9
Jan  2 20:31:51.977: INFO: Got endpoints: latency-svc-q4lq9 [2.638116198s]
Jan  2 20:31:52.113: INFO: Created: latency-svc-flttx
Jan  2 20:31:52.133: INFO: Got endpoints: latency-svc-flttx [2.568364096s]
Jan  2 20:31:52.287: INFO: Created: latency-svc-bldl8
Jan  2 20:31:52.299: INFO: Got endpoints: latency-svc-bldl8 [2.694903969s]
Jan  2 20:31:52.555: INFO: Created: latency-svc-48vcf
Jan  2 20:31:52.588: INFO: Got endpoints: latency-svc-48vcf [2.796654428s]
Jan  2 20:31:52.739: INFO: Created: latency-svc-8g2mf
Jan  2 20:31:52.755: INFO: Got endpoints: latency-svc-8g2mf [2.803962843s]
Jan  2 20:31:52.787: INFO: Created: latency-svc-79sjm
Jan  2 20:31:52.973: INFO: Got endpoints: latency-svc-79sjm [2.959490635s]
Jan  2 20:31:52.985: INFO: Created: latency-svc-4q6qq
Jan  2 20:31:53.003: INFO: Got endpoints: latency-svc-4q6qq [2.793729865s]
Jan  2 20:31:53.076: INFO: Created: latency-svc-wcg8h
Jan  2 20:31:53.156: INFO: Got endpoints: latency-svc-wcg8h [2.741133713s]
Jan  2 20:31:53.230: INFO: Created: latency-svc-98p8x
Jan  2 20:31:53.252: INFO: Got endpoints: latency-svc-98p8x [2.390370439s]
Jan  2 20:31:53.376: INFO: Created: latency-svc-pbjdv
Jan  2 20:31:53.399: INFO: Got endpoints: latency-svc-pbjdv [2.446970923s]
Jan  2 20:31:53.810: INFO: Created: latency-svc-qsxzj
Jan  2 20:31:53.860: INFO: Got endpoints: latency-svc-qsxzj [2.73341022s]
Jan  2 20:31:54.522: INFO: Created: latency-svc-gmm8q
Jan  2 20:31:54.670: INFO: Got endpoints: latency-svc-gmm8q [3.227060232s]
Jan  2 20:31:54.819: INFO: Created: latency-svc-vrc8j
Jan  2 20:31:54.832: INFO: Got endpoints: latency-svc-vrc8j [3.200689309s]
Jan  2 20:31:55.040: INFO: Created: latency-svc-9vlnq
Jan  2 20:31:55.048: INFO: Got endpoints: latency-svc-9vlnq [3.388474376s]
Jan  2 20:31:55.218: INFO: Created: latency-svc-2nqsm
Jan  2 20:31:55.251: INFO: Got endpoints: latency-svc-2nqsm [3.333398001s]
Jan  2 20:31:55.395: INFO: Created: latency-svc-2pjkp
Jan  2 20:31:55.395: INFO: Got endpoints: latency-svc-2pjkp [3.418331968s]
Jan  2 20:31:55.435: INFO: Created: latency-svc-pdrmz
Jan  2 20:31:55.448: INFO: Got endpoints: latency-svc-pdrmz [3.314505304s]
Jan  2 20:31:55.614: INFO: Created: latency-svc-gqkbg
Jan  2 20:31:55.639: INFO: Got endpoints: latency-svc-gqkbg [243.223954ms]
Jan  2 20:31:55.710: INFO: Created: latency-svc-k27wz
Jan  2 20:31:55.839: INFO: Got endpoints: latency-svc-k27wz [3.539651156s]
Jan  2 20:31:55.889: INFO: Created: latency-svc-xxsbw
Jan  2 20:31:55.912: INFO: Got endpoints: latency-svc-xxsbw [3.324156994s]
Jan  2 20:31:56.062: INFO: Created: latency-svc-rlmrr
Jan  2 20:31:56.089: INFO: Got endpoints: latency-svc-rlmrr [3.333770869s]
Jan  2 20:31:56.151: INFO: Created: latency-svc-54279
Jan  2 20:31:56.419: INFO: Got endpoints: latency-svc-54279 [3.445814783s]
Jan  2 20:31:56.439: INFO: Created: latency-svc-qks8v
Jan  2 20:31:56.675: INFO: Got endpoints: latency-svc-qks8v [3.671830623s]
Jan  2 20:31:56.802: INFO: Created: latency-svc-v9cqr
Jan  2 20:31:56.890: INFO: Got endpoints: latency-svc-v9cqr [3.732900158s]
Jan  2 20:31:57.090: INFO: Created: latency-svc-c8mzz
Jan  2 20:31:57.102: INFO: Got endpoints: latency-svc-c8mzz [3.850483112s]
Jan  2 20:31:57.172: INFO: Created: latency-svc-bdf62
Jan  2 20:31:57.241: INFO: Got endpoints: latency-svc-bdf62 [3.841711743s]
Jan  2 20:31:57.272: INFO: Created: latency-svc-dd8zq
Jan  2 20:31:57.304: INFO: Got endpoints: latency-svc-dd8zq [3.443279105s]
Jan  2 20:31:57.330: INFO: Created: latency-svc-qw9sc
Jan  2 20:31:57.421: INFO: Got endpoints: latency-svc-qw9sc [2.750573064s]
Jan  2 20:31:57.442: INFO: Created: latency-svc-ndb74
Jan  2 20:31:57.469: INFO: Got endpoints: latency-svc-ndb74 [2.636737839s]
Jan  2 20:31:57.646: INFO: Created: latency-svc-2jhp9
Jan  2 20:31:57.693: INFO: Got endpoints: latency-svc-2jhp9 [2.645172189s]
Jan  2 20:31:57.925: INFO: Created: latency-svc-5p295
Jan  2 20:31:57.939: INFO: Got endpoints: latency-svc-5p295 [2.687732098s]
Jan  2 20:31:58.077: INFO: Created: latency-svc-84ksb
Jan  2 20:31:58.102: INFO: Got endpoints: latency-svc-84ksb [2.653307552s]
Jan  2 20:31:58.154: INFO: Created: latency-svc-kvgwn
Jan  2 20:31:58.271: INFO: Got endpoints: latency-svc-kvgwn [2.631766168s]
Jan  2 20:31:58.288: INFO: Created: latency-svc-9gz4c
Jan  2 20:31:58.320: INFO: Got endpoints: latency-svc-9gz4c [2.480319924s]
Jan  2 20:31:58.454: INFO: Created: latency-svc-zgqxn
Jan  2 20:31:58.478: INFO: Got endpoints: latency-svc-zgqxn [2.564601788s]
Jan  2 20:31:58.736: INFO: Created: latency-svc-fxzq7
Jan  2 20:31:58.738: INFO: Got endpoints: latency-svc-fxzq7 [2.64926387s]
Jan  2 20:31:58.961: INFO: Created: latency-svc-wk486
Jan  2 20:31:59.177: INFO: Got endpoints: latency-svc-wk486 [2.75736795s]
Jan  2 20:31:59.264: INFO: Created: latency-svc-vjv7r
Jan  2 20:31:59.370: INFO: Got endpoints: latency-svc-vjv7r [2.694028354s]
Jan  2 20:31:59.999: INFO: Created: latency-svc-9x7zw
Jan  2 20:32:00.012: INFO: Got endpoints: latency-svc-9x7zw [3.121869309s]
Jan  2 20:32:00.152: INFO: Created: latency-svc-hzg9j
Jan  2 20:32:00.168: INFO: Got endpoints: latency-svc-hzg9j [3.065159798s]
Jan  2 20:32:00.208: INFO: Created: latency-svc-l2bkg
Jan  2 20:32:00.336: INFO: Got endpoints: latency-svc-l2bkg [3.095267058s]
Jan  2 20:32:00.360: INFO: Created: latency-svc-ngflg
Jan  2 20:32:00.376: INFO: Got endpoints: latency-svc-ngflg [3.071379629s]
Jan  2 20:32:00.416: INFO: Created: latency-svc-qfjx6
Jan  2 20:32:00.523: INFO: Got endpoints: latency-svc-qfjx6 [3.101145691s]
Jan  2 20:32:00.570: INFO: Created: latency-svc-zsxrh
Jan  2 20:32:00.594: INFO: Got endpoints: latency-svc-zsxrh [3.124746612s]
Jan  2 20:32:00.742: INFO: Created: latency-svc-g5klk
Jan  2 20:32:00.774: INFO: Got endpoints: latency-svc-g5klk [3.081011215s]
Jan  2 20:32:00.820: INFO: Created: latency-svc-jrr2j
Jan  2 20:32:00.831: INFO: Got endpoints: latency-svc-jrr2j [2.892172177s]
Jan  2 20:32:01.011: INFO: Created: latency-svc-2gf54
Jan  2 20:32:01.027: INFO: Got endpoints: latency-svc-2gf54 [2.924674807s]
Jan  2 20:32:01.154: INFO: Created: latency-svc-59j84
Jan  2 20:32:01.160: INFO: Got endpoints: latency-svc-59j84 [2.88887281s]
Jan  2 20:32:01.263: INFO: Created: latency-svc-tnnp8
Jan  2 20:32:01.344: INFO: Got endpoints: latency-svc-tnnp8 [3.023494215s]
Jan  2 20:32:01.363: INFO: Created: latency-svc-bt7s5
Jan  2 20:32:01.376: INFO: Got endpoints: latency-svc-bt7s5 [2.898413885s]
Jan  2 20:32:01.416: INFO: Created: latency-svc-7mvtj
Jan  2 20:32:01.467: INFO: Got endpoints: latency-svc-7mvtj [2.728208859s]
Jan  2 20:32:01.568: INFO: Created: latency-svc-4qzzd
Jan  2 20:32:01.583: INFO: Got endpoints: latency-svc-4qzzd [2.406032924s]
Jan  2 20:32:01.644: INFO: Created: latency-svc-rmhrp
Jan  2 20:32:01.751: INFO: Got endpoints: latency-svc-rmhrp [2.381279152s]
Jan  2 20:32:01.771: INFO: Created: latency-svc-l68wh
Jan  2 20:32:01.840: INFO: Got endpoints: latency-svc-l68wh [1.827966036s]
Jan  2 20:32:01.985: INFO: Created: latency-svc-wb869
Jan  2 20:32:02.001: INFO: Got endpoints: latency-svc-wb869 [1.833004225s]
Jan  2 20:32:02.072: INFO: Created: latency-svc-pwdgg
Jan  2 20:32:02.173: INFO: Got endpoints: latency-svc-pwdgg [1.835938845s]
Jan  2 20:32:02.484: INFO: Created: latency-svc-qlhbg
Jan  2 20:32:02.545: INFO: Got endpoints: latency-svc-qlhbg [2.168418823s]
Jan  2 20:32:02.725: INFO: Created: latency-svc-jgw57
Jan  2 20:32:02.741: INFO: Got endpoints: latency-svc-jgw57 [2.21812115s]
Jan  2 20:32:02.892: INFO: Created: latency-svc-dvv6k
Jan  2 20:32:02.925: INFO: Got endpoints: latency-svc-dvv6k [2.331280012s]
Jan  2 20:32:02.983: INFO: Created: latency-svc-znzrx
Jan  2 20:32:03.081: INFO: Got endpoints: latency-svc-znzrx [2.305735571s]
Jan  2 20:32:03.102: INFO: Created: latency-svc-9smtz
Jan  2 20:32:03.120: INFO: Got endpoints: latency-svc-9smtz [2.287788712s]
Jan  2 20:32:03.145: INFO: Created: latency-svc-9n5g2
Jan  2 20:32:03.157: INFO: Got endpoints: latency-svc-9n5g2 [2.129893862s]
Jan  2 20:32:03.289: INFO: Created: latency-svc-d7tz6
Jan  2 20:32:03.310: INFO: Got endpoints: latency-svc-d7tz6 [2.149418968s]
Jan  2 20:32:03.366: INFO: Created: latency-svc-fhww4
Jan  2 20:32:03.384: INFO: Got endpoints: latency-svc-fhww4 [2.039915562s]
Jan  2 20:32:03.503: INFO: Created: latency-svc-tljtd
Jan  2 20:32:03.525: INFO: Got endpoints: latency-svc-tljtd [2.148920111s]
Jan  2 20:32:03.562: INFO: Created: latency-svc-xx986
Jan  2 20:32:03.568: INFO: Got endpoints: latency-svc-xx986 [2.101033286s]
Jan  2 20:32:03.689: INFO: Created: latency-svc-l6hpf
Jan  2 20:32:03.701: INFO: Got endpoints: latency-svc-l6hpf [2.117828145s]
Jan  2 20:32:03.955: INFO: Created: latency-svc-zk2wd
Jan  2 20:32:03.955: INFO: Got endpoints: latency-svc-zk2wd [2.203395168s]
Jan  2 20:32:04.136: INFO: Created: latency-svc-fhhlt
Jan  2 20:32:04.198: INFO: Got endpoints: latency-svc-fhhlt [2.357092584s]
Jan  2 20:32:04.245: INFO: Created: latency-svc-xqrdx
Jan  2 20:32:04.346: INFO: Got endpoints: latency-svc-xqrdx [2.344884503s]
Jan  2 20:32:04.628: INFO: Created: latency-svc-jjrmg
Jan  2 20:32:04.633: INFO: Got endpoints: latency-svc-jjrmg [2.460465572s]
Jan  2 20:32:04.694: INFO: Created: latency-svc-784nv
Jan  2 20:32:04.903: INFO: Created: latency-svc-zvnzb
Jan  2 20:32:05.111: INFO: Created: latency-svc-cdgrw
Jan  2 20:32:05.118: INFO: Got endpoints: latency-svc-784nv [2.572893395s]
Jan  2 20:32:05.133: INFO: Got endpoints: latency-svc-zvnzb [2.391546231s]
Jan  2 20:32:05.136: INFO: Got endpoints: latency-svc-cdgrw [2.210291959s]
Jan  2 20:32:05.186: INFO: Created: latency-svc-28jtk
Jan  2 20:32:05.261: INFO: Got endpoints: latency-svc-28jtk [2.180083821s]
Jan  2 20:32:05.307: INFO: Created: latency-svc-gkh28
Jan  2 20:32:05.322: INFO: Got endpoints: latency-svc-gkh28 [2.202261204s]
Jan  2 20:32:05.472: INFO: Created: latency-svc-hkfvb
Jan  2 20:32:05.570: INFO: Got endpoints: latency-svc-hkfvb [2.413026942s]
Jan  2 20:32:05.774: INFO: Created: latency-svc-b2xl8
Jan  2 20:32:05.812: INFO: Got endpoints: latency-svc-b2xl8 [2.502299028s]
Jan  2 20:32:06.092: INFO: Created: latency-svc-hs26w
Jan  2 20:32:06.117: INFO: Got endpoints: latency-svc-hs26w [2.733272126s]
Jan  2 20:32:06.258: INFO: Created: latency-svc-dbld8
Jan  2 20:32:06.320: INFO: Got endpoints: latency-svc-dbld8 [2.794579093s]
Jan  2 20:32:06.323: INFO: Created: latency-svc-rv9hd
Jan  2 20:32:06.347: INFO: Got endpoints: latency-svc-rv9hd [2.779329795s]
Jan  2 20:32:06.694: INFO: Created: latency-svc-8njd5
Jan  2 20:32:06.898: INFO: Got endpoints: latency-svc-8njd5 [3.197104946s]
Jan  2 20:32:06.921: INFO: Created: latency-svc-425cv
Jan  2 20:32:06.952: INFO: Got endpoints: latency-svc-425cv [2.996488441s]
Jan  2 20:32:07.216: INFO: Created: latency-svc-kb98t
Jan  2 20:32:07.216: INFO: Got endpoints: latency-svc-kb98t [3.017894491s]
Jan  2 20:32:07.299: INFO: Created: latency-svc-zlf2g
Jan  2 20:32:07.306: INFO: Got endpoints: latency-svc-zlf2g [2.95950167s]
Jan  2 20:32:07.358: INFO: Created: latency-svc-9rjd4
Jan  2 20:32:07.366: INFO: Got endpoints: latency-svc-9rjd4 [2.732469167s]
Jan  2 20:32:07.513: INFO: Created: latency-svc-xj5ld
Jan  2 20:32:07.579: INFO: Created: latency-svc-cqjb7
Jan  2 20:32:07.580: INFO: Got endpoints: latency-svc-xj5ld [2.461670316s]
Jan  2 20:32:07.679: INFO: Got endpoints: latency-svc-cqjb7 [2.543131312s]
Jan  2 20:32:07.709: INFO: Created: latency-svc-lsz9h
Jan  2 20:32:07.724: INFO: Got endpoints: latency-svc-lsz9h [2.590755762s]
Jan  2 20:32:07.914: INFO: Created: latency-svc-w7bdj
Jan  2 20:32:07.946: INFO: Got endpoints: latency-svc-w7bdj [2.684714312s]
Jan  2 20:32:08.101: INFO: Created: latency-svc-m5rck
Jan  2 20:32:08.105: INFO: Got endpoints: latency-svc-m5rck [2.782537212s]
Jan  2 20:32:08.167: INFO: Created: latency-svc-frzhp
Jan  2 20:32:08.181: INFO: Got endpoints: latency-svc-frzhp [2.610949489s]
Jan  2 20:32:08.365: INFO: Created: latency-svc-b9rtv
Jan  2 20:32:08.366: INFO: Got endpoints: latency-svc-b9rtv [2.553063953s]
Jan  2 20:32:08.486: INFO: Created: latency-svc-x8wsj
Jan  2 20:32:08.551: INFO: Created: latency-svc-vxkxw
Jan  2 20:32:08.692: INFO: Got endpoints: latency-svc-x8wsj [2.574155393s]
Jan  2 20:32:08.696: INFO: Got endpoints: latency-svc-vxkxw [2.374846157s]
Jan  2 20:32:08.743: INFO: Created: latency-svc-khkrb
Jan  2 20:32:08.755: INFO: Got endpoints: latency-svc-khkrb [2.407859161s]
Jan  2 20:32:08.916: INFO: Created: latency-svc-k7gsz
Jan  2 20:32:08.939: INFO: Got endpoints: latency-svc-k7gsz [2.03962077s]
Jan  2 20:32:09.093: INFO: Created: latency-svc-lpbsl
Jan  2 20:32:09.104: INFO: Got endpoints: latency-svc-lpbsl [2.151692998s]
Jan  2 20:32:09.159: INFO: Created: latency-svc-zmdkd
Jan  2 20:32:09.163: INFO: Got endpoints: latency-svc-zmdkd [1.946454002s]
Jan  2 20:32:09.282: INFO: Created: latency-svc-22hgs
Jan  2 20:32:09.288: INFO: Got endpoints: latency-svc-22hgs [1.981517786s]
Jan  2 20:32:09.327: INFO: Created: latency-svc-5d2gs
Jan  2 20:32:09.452: INFO: Got endpoints: latency-svc-5d2gs [2.085834407s]
Jan  2 20:32:09.501: INFO: Created: latency-svc-cxs8b
Jan  2 20:32:09.501: INFO: Got endpoints: latency-svc-cxs8b [1.921399171s]
Jan  2 20:32:09.661: INFO: Created: latency-svc-s54bb
Jan  2 20:32:09.669: INFO: Got endpoints: latency-svc-s54bb [1.989959712s]
Jan  2 20:32:09.720: INFO: Created: latency-svc-gqzzt
Jan  2 20:32:09.746: INFO: Got endpoints: latency-svc-gqzzt [2.022037961s]
Jan  2 20:32:09.902: INFO: Created: latency-svc-dsq99
Jan  2 20:32:09.938: INFO: Got endpoints: latency-svc-dsq99 [1.992249618s]
Jan  2 20:32:10.203: INFO: Created: latency-svc-6rz7c
Jan  2 20:32:10.281: INFO: Got endpoints: latency-svc-6rz7c [2.176366895s]
Jan  2 20:32:10.327: INFO: Created: latency-svc-nr6p9
Jan  2 20:32:10.365: INFO: Got endpoints: latency-svc-nr6p9 [2.183001902s]
Jan  2 20:32:10.559: INFO: Created: latency-svc-d7gz4
Jan  2 20:32:10.725: INFO: Got endpoints: latency-svc-d7gz4 [2.358938693s]
Jan  2 20:32:10.749: INFO: Created: latency-svc-2qlsn
Jan  2 20:32:10.777: INFO: Got endpoints: latency-svc-2qlsn [2.08154098s]
Jan  2 20:32:10.995: INFO: Created: latency-svc-8tww4
Jan  2 20:32:11.032: INFO: Got endpoints: latency-svc-8tww4 [2.339440937s]
Jan  2 20:32:11.244: INFO: Created: latency-svc-fsjr2
Jan  2 20:32:11.244: INFO: Got endpoints: latency-svc-fsjr2 [2.48833292s]
Jan  2 20:32:11.322: INFO: Created: latency-svc-h7s6t
Jan  2 20:32:11.374: INFO: Got endpoints: latency-svc-h7s6t [2.434790194s]
Jan  2 20:32:11.388: INFO: Created: latency-svc-b2ncx
Jan  2 20:32:11.407: INFO: Got endpoints: latency-svc-b2ncx [2.303535255s]
Jan  2 20:32:11.571: INFO: Created: latency-svc-dght8
Jan  2 20:32:11.627: INFO: Got endpoints: latency-svc-dght8 [2.464133907s]
Jan  2 20:32:11.631: INFO: Created: latency-svc-jmwgj
Jan  2 20:32:11.751: INFO: Got endpoints: latency-svc-jmwgj [2.463269639s]
Jan  2 20:32:11.784: INFO: Created: latency-svc-p668r
Jan  2 20:32:11.814: INFO: Got endpoints: latency-svc-p668r [2.36160584s]
Jan  2 20:32:12.644: INFO: Created: latency-svc-clz5w
Jan  2 20:32:13.215: INFO: Got endpoints: latency-svc-clz5w [3.713630411s]
Jan  2 20:32:13.280: INFO: Created: latency-svc-nxv6s
Jan  2 20:32:13.409: INFO: Got endpoints: latency-svc-nxv6s [3.739690671s]
Jan  2 20:32:13.444: INFO: Created: latency-svc-qjcp7
Jan  2 20:32:13.464: INFO: Got endpoints: latency-svc-qjcp7 [3.717723884s]
Jan  2 20:32:13.673: INFO: Created: latency-svc-2vtnp
Jan  2 20:32:13.674: INFO: Got endpoints: latency-svc-2vtnp [3.734932298s]
Jan  2 20:32:13.836: INFO: Created: latency-svc-98x5s
Jan  2 20:32:13.887: INFO: Got endpoints: latency-svc-98x5s [3.605499247s]
Jan  2 20:32:14.042: INFO: Created: latency-svc-9ssf9
Jan  2 20:32:14.117: INFO: Got endpoints: latency-svc-9ssf9 [3.752276901s]
Jan  2 20:32:14.292: INFO: Created: latency-svc-7pt7p
Jan  2 20:32:14.299: INFO: Got endpoints: latency-svc-7pt7p [3.574466739s]
Jan  2 20:32:14.405: INFO: Created: latency-svc-r4lnw
Jan  2 20:32:14.426: INFO: Got endpoints: latency-svc-r4lnw [3.648955854s]
Jan  2 20:32:14.576: INFO: Created: latency-svc-fjwmg
Jan  2 20:32:14.764: INFO: Got endpoints: latency-svc-fjwmg [3.732488984s]
Jan  2 20:32:14.786: INFO: Created: latency-svc-5228p
Jan  2 20:32:14.804: INFO: Got endpoints: latency-svc-5228p [3.560342764s]
Jan  2 20:32:15.006: INFO: Created: latency-svc-zhjd2
Jan  2 20:32:15.144: INFO: Got endpoints: latency-svc-zhjd2 [3.770112143s]
Jan  2 20:32:15.149: INFO: Created: latency-svc-nw6jk
Jan  2 20:32:15.316: INFO: Got endpoints: latency-svc-nw6jk [3.908458178s]
Jan  2 20:32:15.331: INFO: Created: latency-svc-kwfpt
Jan  2 20:32:15.355: INFO: Got endpoints: latency-svc-kwfpt [3.728164389s]
Jan  2 20:32:15.613: INFO: Created: latency-svc-4l6sp
Jan  2 20:32:15.629: INFO: Got endpoints: latency-svc-4l6sp [3.87804703s]
Jan  2 20:32:15.846: INFO: Created: latency-svc-txlzs
Jan  2 20:32:15.876: INFO: Got endpoints: latency-svc-txlzs [4.061316659s]
Jan  2 20:32:15.937: INFO: Created: latency-svc-767n8
Jan  2 20:32:16.070: INFO: Got endpoints: latency-svc-767n8 [2.854447893s]
Jan  2 20:32:16.119: INFO: Created: latency-svc-82dgw
Jan  2 20:32:16.146: INFO: Got endpoints: latency-svc-82dgw [2.736908866s]
Jan  2 20:32:16.276: INFO: Created: latency-svc-8s8l7
Jan  2 20:32:16.291: INFO: Got endpoints: latency-svc-8s8l7 [2.82626244s]
Jan  2 20:32:16.352: INFO: Created: latency-svc-pfkmm
Jan  2 20:32:16.449: INFO: Got endpoints: latency-svc-pfkmm [2.775307581s]
Jan  2 20:32:16.521: INFO: Created: latency-svc-rfp4t
Jan  2 20:32:16.643: INFO: Got endpoints: latency-svc-rfp4t [2.755179737s]
Jan  2 20:32:16.687: INFO: Created: latency-svc-nxhjm
Jan  2 20:32:16.700: INFO: Got endpoints: latency-svc-nxhjm [2.582255923s]
Jan  2 20:32:16.849: INFO: Created: latency-svc-5x2g2
Jan  2 20:32:16.875: INFO: Got endpoints: latency-svc-5x2g2 [2.57503573s]
Jan  2 20:32:16.999: INFO: Created: latency-svc-dvhzf
Jan  2 20:32:17.244: INFO: Got endpoints: latency-svc-dvhzf [2.817046663s]
Jan  2 20:32:17.342: INFO: Created: latency-svc-lst2k
Jan  2 20:32:17.499: INFO: Got endpoints: latency-svc-lst2k [2.73457344s]
Jan  2 20:32:17.672: INFO: Created: latency-svc-z624s
Jan  2 20:32:18.329: INFO: Got endpoints: latency-svc-z624s [3.525039175s]
Jan  2 20:32:18.377: INFO: Created: latency-svc-4dpgh
Jan  2 20:32:18.458: INFO: Got endpoints: latency-svc-4dpgh [3.313683082s]
Jan  2 20:32:18.711: INFO: Created: latency-svc-j465l
Jan  2 20:32:18.728: INFO: Got endpoints: latency-svc-j465l [3.412081041s]
Jan  2 20:32:18.779: INFO: Created: latency-svc-p4xmz
Jan  2 20:32:18.795: INFO: Got endpoints: latency-svc-p4xmz [3.439645312s]
Jan  2 20:32:18.996: INFO: Created: latency-svc-5swkz
Jan  2 20:32:19.015: INFO: Got endpoints: latency-svc-5swkz [3.385271518s]
Jan  2 20:32:19.142: INFO: Created: latency-svc-gvnrb
Jan  2 20:32:19.159: INFO: Got endpoints: latency-svc-gvnrb [3.283209952s]
Jan  2 20:32:19.341: INFO: Created: latency-svc-lmd8m
Jan  2 20:32:19.351: INFO: Got endpoints: latency-svc-lmd8m [3.280757426s]
Jan  2 20:32:19.369: INFO: Created: latency-svc-lwk8w
Jan  2 20:32:19.384: INFO: Got endpoints: latency-svc-lwk8w [3.23713438s]
Jan  2 20:32:19.421: INFO: Created: latency-svc-qprrw
Jan  2 20:32:19.574: INFO: Got endpoints: latency-svc-qprrw [3.282654662s]
Jan  2 20:32:19.587: INFO: Created: latency-svc-24hlr
Jan  2 20:32:19.596: INFO: Got endpoints: latency-svc-24hlr [3.146580406s]
Jan  2 20:32:19.656: INFO: Created: latency-svc-phfp4
Jan  2 20:32:19.839: INFO: Got endpoints: latency-svc-phfp4 [3.196386625s]
Jan  2 20:32:19.874: INFO: Created: latency-svc-g2q2m
Jan  2 20:32:19.885: INFO: Got endpoints: latency-svc-g2q2m [3.184724256s]
Jan  2 20:32:20.049: INFO: Created: latency-svc-n67j9
Jan  2 20:32:20.060: INFO: Got endpoints: latency-svc-n67j9 [3.185241642s]
Jan  2 20:32:20.119: INFO: Created: latency-svc-z7ld5
Jan  2 20:32:20.128: INFO: Got endpoints: latency-svc-z7ld5 [2.883490706s]
Jan  2 20:32:20.293: INFO: Created: latency-svc-dgwjn
Jan  2 20:32:20.300: INFO: Got endpoints: latency-svc-dgwjn [2.799981018s]
Jan  2 20:32:20.380: INFO: Created: latency-svc-tg57b
Jan  2 20:32:20.454: INFO: Got endpoints: latency-svc-tg57b [2.124748836s]
Jan  2 20:32:20.502: INFO: Created: latency-svc-f6x9d
Jan  2 20:32:20.529: INFO: Got endpoints: latency-svc-f6x9d [2.070721681s]
Jan  2 20:32:20.731: INFO: Created: latency-svc-49ncl
Jan  2 20:32:20.899: INFO: Got endpoints: latency-svc-49ncl [2.170190121s]
Jan  2 20:32:20.908: INFO: Created: latency-svc-7gwc7
Jan  2 20:32:20.933: INFO: Got endpoints: latency-svc-7gwc7 [2.137189009s]
Jan  2 20:32:21.007: INFO: Created: latency-svc-tj6sq
Jan  2 20:32:21.071: INFO: Got endpoints: latency-svc-tj6sq [2.056415369s]
Jan  2 20:32:21.132: INFO: Created: latency-svc-d6j7m
Jan  2 20:32:21.135: INFO: Got endpoints: latency-svc-d6j7m [1.975666408s]
Jan  2 20:32:21.135: INFO: Latencies: [216.868903ms 243.223954ms 429.621444ms 517.69045ms 756.377363ms 874.247564ms 1.169021653s 1.344250836s 1.637275159s 1.821455304s 1.827966036s 1.833004225s 1.835938845s 1.921399171s 1.946454002s 1.975666408s 1.981517786s 1.989959712s 1.992249618s 2.022037961s 2.03962077s 2.039915562s 2.056415369s 2.070721681s 2.08154098s 2.085834407s 2.088333657s 2.099134538s 2.101033286s 2.117828145s 2.124748836s 2.129893862s 2.137189009s 2.148920111s 2.149418968s 2.151692998s 2.168418823s 2.170190121s 2.176366895s 2.180083821s 2.183001902s 2.202261204s 2.203395168s 2.210291959s 2.21812115s 2.230726832s 2.287788712s 2.303535255s 2.305735571s 2.331280012s 2.339440937s 2.344884503s 2.357092584s 2.358938693s 2.36160584s 2.374846157s 2.381279152s 2.390370439s 2.391546231s 2.406032924s 2.407859161s 2.413026942s 2.434790194s 2.446970923s 2.449248419s 2.460465572s 2.461670316s 2.463269639s 2.464133907s 2.480319924s 2.48833292s 2.493985027s 2.502299028s 2.520371373s 2.543131312s 2.553063953s 2.564601788s 2.568364096s 2.572893395s 2.574155393s 2.57503573s 2.582255923s 2.590755762s 2.610949489s 2.623353241s 2.631766168s 2.636737839s 2.638116198s 2.645172189s 2.64926387s 2.653307552s 2.684714312s 2.687732098s 2.694028354s 2.694903969s 2.705995084s 2.709800196s 2.728208859s 2.732469167s 2.733272126s 2.73341022s 2.73457344s 2.736908866s 2.741133713s 2.750573064s 2.755179737s 2.75736795s 2.763279685s 2.775307581s 2.779329795s 2.782537212s 2.793729865s 2.794579093s 2.796654428s 2.799981018s 2.803962843s 2.817046663s 2.82626244s 2.854447893s 2.883490706s 2.88887281s 2.892172177s 2.893615752s 2.898413885s 2.924674807s 2.959490635s 2.95950167s 2.967541752s 2.996488441s 3.003116167s 3.017894491s 3.023494215s 3.027951598s 3.042801973s 3.059543585s 3.065159798s 3.071379629s 3.081011215s 3.094169823s 3.095267058s 3.101145691s 3.121869309s 3.124746612s 3.146580406s 3.184724256s 3.185241642s 3.196386625s 3.197104946s 3.200689309s 3.203064973s 3.227060232s 3.23713438s 3.280757426s 3.282654662s 3.283209952s 3.296089366s 3.313683082s 3.314505304s 3.324156994s 3.333398001s 3.333770869s 3.337788398s 3.359014611s 3.379136535s 3.385271518s 3.388474376s 3.412081041s 3.418331968s 3.439645312s 3.443279105s 3.445814783s 3.451533864s 3.482168745s 3.525039175s 3.539651156s 3.560342764s 3.574466739s 3.575436965s 3.580795286s 3.605499247s 3.636150593s 3.648955854s 3.671830623s 3.677139205s 3.685980505s 3.713630411s 3.717723884s 3.728164389s 3.732488984s 3.732900158s 3.734932298s 3.739690671s 3.752276901s 3.770112143s 3.841711743s 3.850483112s 3.87804703s 3.908458178s 3.913152317s 4.061316659s]
Jan  2 20:32:21.135: INFO: 50 %ile: 2.73341022s
Jan  2 20:32:21.135: INFO: 90 %ile: 3.636150593s
Jan  2 20:32:21.135: INFO: 99 %ile: 3.913152317s
Jan  2 20:32:21.135: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:32:21.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-72kl2" for this suite.
Jan  2 20:33:15.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:33:15.351: INFO: namespace: e2e-tests-svc-latency-72kl2, resource: bindings, ignored listing per whitelist
Jan  2 20:33:15.377: INFO: namespace e2e-tests-svc-latency-72kl2 deletion completed in 54.231741195s

• [SLOW TEST:102.148 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:33:15.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:34:15.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bs6tn" for this suite.
Jan  2 20:34:37.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:34:37.961: INFO: namespace: e2e-tests-container-probe-bs6tn, resource: bindings, ignored listing per whitelist
Jan  2 20:34:38.041: INFO: namespace e2e-tests-container-probe-bs6tn deletion completed in 22.1842579s

• [SLOW TEST:82.664 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:34:38.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  2 20:34:38.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:38.925: INFO: stderr: ""
Jan  2 20:34:38.925: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 20:34:38.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:39.180: INFO: stderr: ""
Jan  2 20:34:39.180: INFO: stdout: "update-demo-nautilus-s68sz update-demo-nautilus-tqrrn "
Jan  2 20:34:39.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s68sz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:39.345: INFO: stderr: ""
Jan  2 20:34:39.345: INFO: stdout: ""
Jan  2 20:34:39.345: INFO: update-demo-nautilus-s68sz is created but not running
Jan  2 20:34:44.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:44.657: INFO: stderr: ""
Jan  2 20:34:44.657: INFO: stdout: "update-demo-nautilus-s68sz update-demo-nautilus-tqrrn "
Jan  2 20:34:44.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s68sz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:44.787: INFO: stderr: ""
Jan  2 20:34:44.787: INFO: stdout: ""
Jan  2 20:34:44.787: INFO: update-demo-nautilus-s68sz is created but not running
Jan  2 20:34:49.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:49.974: INFO: stderr: ""
Jan  2 20:34:49.974: INFO: stdout: "update-demo-nautilus-s68sz update-demo-nautilus-tqrrn "
Jan  2 20:34:49.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s68sz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:50.098: INFO: stderr: ""
Jan  2 20:34:50.098: INFO: stdout: ""
Jan  2 20:34:50.098: INFO: update-demo-nautilus-s68sz is created but not running
Jan  2 20:34:55.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:55.289: INFO: stderr: ""
Jan  2 20:34:55.289: INFO: stdout: "update-demo-nautilus-s68sz update-demo-nautilus-tqrrn "
Jan  2 20:34:55.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s68sz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:55.480: INFO: stderr: ""
Jan  2 20:34:55.480: INFO: stdout: "true"
Jan  2 20:34:55.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s68sz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:55.597: INFO: stderr: ""
Jan  2 20:34:55.597: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 20:34:55.597: INFO: validating pod update-demo-nautilus-s68sz
Jan  2 20:34:55.648: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 20:34:55.648: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 20:34:55.648: INFO: update-demo-nautilus-s68sz is verified up and running
Jan  2 20:34:55.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tqrrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:55.773: INFO: stderr: ""
Jan  2 20:34:55.773: INFO: stdout: "true"
Jan  2 20:34:55.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tqrrn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:55.933: INFO: stderr: ""
Jan  2 20:34:55.933: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 20:34:55.933: INFO: validating pod update-demo-nautilus-tqrrn
Jan  2 20:34:55.948: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 20:34:55.948: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 20:34:55.948: INFO: update-demo-nautilus-tqrrn is verified up and running
STEP: using delete to clean up resources
Jan  2 20:34:55.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:56.115: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 20:34:56.115: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  2 20:34:56.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-5khwc'
Jan  2 20:34:56.432: INFO: stderr: "No resources found.\n"
Jan  2 20:34:56.432: INFO: stdout: ""
Jan  2 20:34:56.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-5khwc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 20:34:56.613: INFO: stderr: ""
Jan  2 20:34:56.613: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:34:56.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5khwc" for this suite.
Jan  2 20:35:20.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:35:20.786: INFO: namespace: e2e-tests-kubectl-5khwc, resource: bindings, ignored listing per whitelist
Jan  2 20:35:20.815: INFO: namespace e2e-tests-kubectl-5khwc deletion completed in 24.168445946s

• [SLOW TEST:42.773 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:35:20.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  2 20:35:21.175: INFO: Waiting up to 5m0s for pod "pod-6552c7bb-2d9f-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-9rfj6" to be "success or failure"
Jan  2 20:35:21.201: INFO: Pod "pod-6552c7bb-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.507484ms
Jan  2 20:35:23.289: INFO: Pod "pod-6552c7bb-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113469468s
Jan  2 20:35:25.310: INFO: Pod "pod-6552c7bb-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134821464s
Jan  2 20:35:27.330: INFO: Pod "pod-6552c7bb-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154943033s
Jan  2 20:35:29.351: INFO: Pod "pod-6552c7bb-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175829297s
Jan  2 20:35:31.370: INFO: Pod "pod-6552c7bb-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194421139s
Jan  2 20:35:33.394: INFO: Pod "pod-6552c7bb-2d9f-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.218509331s
STEP: Saw pod success
Jan  2 20:35:33.394: INFO: Pod "pod-6552c7bb-2d9f-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:35:33.427: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6552c7bb-2d9f-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 20:35:34.427: INFO: Waiting for pod pod-6552c7bb-2d9f-11ea-814c-0242ac110005 to disappear
Jan  2 20:35:34.607: INFO: Pod pod-6552c7bb-2d9f-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:35:34.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9rfj6" for this suite.
Jan  2 20:35:40.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:35:41.015: INFO: namespace: e2e-tests-emptydir-9rfj6, resource: bindings, ignored listing per whitelist
Jan  2 20:35:41.018: INFO: namespace e2e-tests-emptydir-9rfj6 deletion completed in 6.380211991s

• [SLOW TEST:20.203 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:35:41.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-7152252c-2d9f-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 20:35:41.274: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-54jzn" to be "success or failure"
Jan  2 20:35:41.304: INFO: Pod "pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.952585ms
Jan  2 20:35:43.323: INFO: Pod "pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048709081s
Jan  2 20:35:45.344: INFO: Pod "pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06944094s
Jan  2 20:35:47.743: INFO: Pod "pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.468434304s
Jan  2 20:35:49.760: INFO: Pod "pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.486099007s
Jan  2 20:35:51.780: INFO: Pod "pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.505500022s
STEP: Saw pod success
Jan  2 20:35:51.780: INFO: Pod "pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:35:51.789: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 20:35:52.871: INFO: Waiting for pod pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005 to disappear
Jan  2 20:35:53.411: INFO: Pod pod-projected-configmaps-71534077-2d9f-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:35:53.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-54jzn" for this suite.
Jan  2 20:35:59.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:35:59.619: INFO: namespace: e2e-tests-projected-54jzn, resource: bindings, ignored listing per whitelist
Jan  2 20:35:59.772: INFO: namespace e2e-tests-projected-54jzn deletion completed in 6.339086697s

• [SLOW TEST:18.752 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:35:59.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-snt6f
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-snt6f to expose endpoints map[]
Jan  2 20:36:00.038: INFO: Get endpoints failed (11.733111ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan  2 20:36:01.055: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-snt6f exposes endpoints map[] (1.028873854s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-snt6f
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-snt6f to expose endpoints map[pod1:[100]]
Jan  2 20:36:05.672: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.571552718s elapsed, will retry)
Jan  2 20:36:11.674: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-snt6f exposes endpoints map[pod1:[100]] (10.57319862s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-snt6f
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-snt6f to expose endpoints map[pod2:[101] pod1:[100]]
Jan  2 20:36:16.809: INFO: Unexpected endpoints: found map[7d220fcb-2d9f-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.118673267s elapsed, will retry)
Jan  2 20:36:22.851: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-snt6f exposes endpoints map[pod1:[100] pod2:[101]] (11.160473721s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-snt6f
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-snt6f to expose endpoints map[pod2:[101]]
Jan  2 20:36:24.140: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-snt6f exposes endpoints map[pod2:[101]] (1.265324686s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-snt6f
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-snt6f to expose endpoints map[]
Jan  2 20:36:26.337: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-snt6f exposes endpoints map[] (1.957937435s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:36:27.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-snt6f" for this suite.
Jan  2 20:36:50.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:36:50.260: INFO: namespace: e2e-tests-services-snt6f, resource: bindings, ignored listing per whitelist
Jan  2 20:36:50.374: INFO: namespace e2e-tests-services-snt6f deletion completed in 22.355319543s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:50.602 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
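The multiport-endpoints test above repeatedly compares the service's Endpoints against an expected pod-to-ports map (e.g. map[pod1:[100] pod2:[101]]) until they agree or the 3m0s budget runs out. The comparison it converges on can be sketched roughly as below; `endpointsEqual` is a hypothetical stand-in for the framework's actual validation, with port order inside each pod deliberately ignored:

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// endpointsEqual reports whether two pod→ports maps describe the same set
// of endpoints, ignoring the order of ports within each pod.
// (Sketch only — not the e2e framework's real validateEndpointsOrFail.)
func endpointsEqual(expected, found map[string][]int) bool {
	if len(expected) != len(found) {
		return false
	}
	for pod, ports := range expected {
		got, ok := found[pod]
		if !ok {
			return false
		}
		a := append([]int(nil), ports...)
		b := append([]int(nil), got...)
		sort.Ints(a)
		sort.Ints(b)
		if !reflect.DeepEqual(a, b) {
			return false
		}
	}
	return true
}

func main() {
	expected := map[string][]int{"pod1": {100}, "pod2": {101}}
	found := map[string][]int{"pod2": {101}, "pod1": {100}}
	fmt.Println(endpointsEqual(expected, found)) // true: map iteration order is irrelevant
	fmt.Println(endpointsEqual(expected, map[string][]int{"pod1": {100}}))
}
```

In the real test this check runs inside a poll loop, which is why the log shows an "Unexpected endpoints ... will retry" line before each "successfully validated" line.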
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:36:50.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-p7zlv in namespace e2e-tests-proxy-h2tql
I0102 20:36:50.887573       8 runners.go:184] Created replication controller with name: proxy-service-p7zlv, namespace: e2e-tests-proxy-h2tql, replica count: 1
I0102 20:36:51.938840       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:36:52.939704       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:36:53.940509       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:36:54.941588       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:36:55.942199       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:36:56.942685       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:36:57.943498       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:36:58.944297       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:36:59.944861       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 20:37:00.946233       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 20:37:01.946952       8 runners.go:184] proxy-service-p7zlv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  2 20:37:01.969: INFO: setup took 11.259148785s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  2 20:37:01.996: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-h2tql/pods/proxy-service-p7zlv-vxrpm:1080/proxy/: ...
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan  2 20:37:17.959: INFO: Waiting up to 5m0s for pod "client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005" in namespace "e2e-tests-containers-pzp6t" to be "success or failure"
Jan  2 20:37:17.973: INFO: Pod "client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.240885ms
Jan  2 20:37:20.456: INFO: Pod "client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.497712838s
Jan  2 20:37:22.498: INFO: Pod "client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538969811s
Jan  2 20:37:24.520: INFO: Pod "client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.561681386s
Jan  2 20:37:26.552: INFO: Pod "client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593336651s
Jan  2 20:37:28.769: INFO: Pod "client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.80986179s
Jan  2 20:37:30.780: INFO: Pod "client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.821632084s
STEP: Saw pod success
Jan  2 20:37:30.781: INFO: Pod "client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:37:30.785: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 20:37:31.032: INFO: Waiting for pod client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005 to disappear
Jan  2 20:37:31.039: INFO: Pod client-containers-aaf3d8ca-2d9f-11ea-814c-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:37:31.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-pzp6t" for this suite.
Jan  2 20:37:37.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:37:37.144: INFO: namespace: e2e-tests-containers-pzp6t, resource: bindings, ignored listing per whitelist
Jan  2 20:37:37.269: INFO: namespace e2e-tests-containers-pzp6t deletion completed in 6.222790457s

• [SLOW TEST:19.608 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
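The override-command test above passes because a pod spec's `command` field replaces the image's Docker ENTRYPOINT (and `args` replaces CMD); the container exits 0, the pod reaches Succeeded, and the "success or failure" condition is satisfied. A minimal pod of that shape looks like the following — the image, command, and args here are illustrative assumptions, not the ones the suite actually uses:

```yaml
# Hypothetical minimal pod demonstrating command override.
# (Illustration only; the e2e suite uses its own test image.)
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox            # assumption: any image with /bin/echo works
    command: ["/bin/echo"]    # replaces the image's ENTRYPOINT
    args: ["hello", "world"]  # replaces the image's CMD
```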
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:37:37.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-p8b82
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-p8b82
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-p8b82
Jan  2 20:37:37.498: INFO: Found 0 stateful pods, waiting for 1
Jan  2 20:37:47.519: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan  2 20:37:47.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 20:37:48.471: INFO: stderr: "I0102 20:37:47.791344    1805 log.go:172] (0xc000138630) (0xc000738640) Create stream\nI0102 20:37:47.791701    1805 log.go:172] (0xc000138630) (0xc000738640) Stream added, broadcasting: 1\nI0102 20:37:47.799435    1805 log.go:172] (0xc000138630) Reply frame received for 1\nI0102 20:37:47.799519    1805 log.go:172] (0xc000138630) (0xc0006a2be0) Create stream\nI0102 20:37:47.799528    1805 log.go:172] (0xc000138630) (0xc0006a2be0) Stream added, broadcasting: 3\nI0102 20:37:47.800789    1805 log.go:172] (0xc000138630) Reply frame received for 3\nI0102 20:37:47.800838    1805 log.go:172] (0xc000138630) (0xc0004f2000) Create stream\nI0102 20:37:47.800852    1805 log.go:172] (0xc000138630) (0xc0004f2000) Stream added, broadcasting: 5\nI0102 20:37:47.801999    1805 log.go:172] (0xc000138630) Reply frame received for 5\nI0102 20:37:48.154872    1805 log.go:172] (0xc000138630) Data frame received for 3\nI0102 20:37:48.155056    1805 log.go:172] (0xc0006a2be0) (3) Data frame handling\nI0102 20:37:48.155091    1805 log.go:172] (0xc0006a2be0) (3) Data frame sent\nI0102 20:37:48.439924    1805 log.go:172] (0xc000138630) Data frame received for 1\nI0102 20:37:48.440167    1805 log.go:172] (0xc000738640) (1) Data frame handling\nI0102 20:37:48.440223    1805 log.go:172] (0xc000738640) (1) Data frame sent\nI0102 20:37:48.440793    1805 log.go:172] (0xc000138630) (0xc000738640) Stream removed, broadcasting: 1\nI0102 20:37:48.441401    1805 log.go:172] (0xc000138630) (0xc0006a2be0) Stream removed, broadcasting: 3\nI0102 20:37:48.442821    1805 log.go:172] (0xc000138630) (0xc0004f2000) Stream removed, broadcasting: 5\nI0102 20:37:48.443008    1805 log.go:172] (0xc000138630) Go away received\nI0102 20:37:48.443175    1805 log.go:172] (0xc000138630) (0xc000738640) Stream removed, broadcasting: 1\nI0102 20:37:48.443225    1805 log.go:172] (0xc000138630) (0xc0006a2be0) Stream removed, broadcasting: 3\nI0102 20:37:48.443236    1805 log.go:172] (0xc000138630) (0xc0004f2000) Stream removed, broadcasting: 5\n"
Jan  2 20:37:48.471: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 20:37:48.471: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 20:37:48.504: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 20:37:48.504: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 20:37:48.672: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 20:37:48.673: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  }]
Jan  2 20:37:48.673: INFO: 
Jan  2 20:37:48.673: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  2 20:37:50.156: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.95351189s
Jan  2 20:37:51.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.469760257s
Jan  2 20:37:52.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.812313214s
Jan  2 20:37:53.897: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.793471061s
Jan  2 20:37:54.908: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.729835488s
Jan  2 20:37:56.003: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.718064808s
Jan  2 20:37:58.776: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.622798262s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-p8b82
Jan  2 20:37:59.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:38:00.737: INFO: stderr: "I0102 20:38:00.065272    1828 log.go:172] (0xc0008522c0) (0xc000940640) Create stream\nI0102 20:38:00.065498    1828 log.go:172] (0xc0008522c0) (0xc000940640) Stream added, broadcasting: 1\nI0102 20:38:00.076816    1828 log.go:172] (0xc0008522c0) Reply frame received for 1\nI0102 20:38:00.076990    1828 log.go:172] (0xc0008522c0) (0xc0007d2dc0) Create stream\nI0102 20:38:00.077021    1828 log.go:172] (0xc0008522c0) (0xc0007d2dc0) Stream added, broadcasting: 3\nI0102 20:38:00.078245    1828 log.go:172] (0xc0008522c0) Reply frame received for 3\nI0102 20:38:00.078346    1828 log.go:172] (0xc0008522c0) (0xc0009406e0) Create stream\nI0102 20:38:00.078373    1828 log.go:172] (0xc0008522c0) (0xc0009406e0) Stream added, broadcasting: 5\nI0102 20:38:00.079332    1828 log.go:172] (0xc0008522c0) Reply frame received for 5\nI0102 20:38:00.309290    1828 log.go:172] (0xc0008522c0) Data frame received for 3\nI0102 20:38:00.309460    1828 log.go:172] (0xc0007d2dc0) (3) Data frame handling\nI0102 20:38:00.309507    1828 log.go:172] (0xc0007d2dc0) (3) Data frame sent\nI0102 20:38:00.723798    1828 log.go:172] (0xc0008522c0) Data frame received for 1\nI0102 20:38:00.723942    1828 log.go:172] (0xc0008522c0) (0xc0007d2dc0) Stream removed, broadcasting: 3\nI0102 20:38:00.724082    1828 log.go:172] (0xc000940640) (1) Data frame handling\nI0102 20:38:00.724116    1828 log.go:172] (0xc000940640) (1) Data frame sent\nI0102 20:38:00.724160    1828 log.go:172] (0xc0008522c0) (0xc0009406e0) Stream removed, broadcasting: 5\nI0102 20:38:00.724188    1828 log.go:172] (0xc0008522c0) (0xc000940640) Stream removed, broadcasting: 1\nI0102 20:38:00.724231    1828 log.go:172] (0xc0008522c0) Go away received\nI0102 20:38:00.725033    1828 log.go:172] (0xc0008522c0) (0xc000940640) Stream removed, broadcasting: 1\nI0102 20:38:00.725064    1828 log.go:172] (0xc0008522c0) (0xc0007d2dc0) Stream removed, broadcasting: 3\nI0102 20:38:00.725096    1828 log.go:172] (0xc0008522c0) (0xc0009406e0) Stream removed, broadcasting: 5\n"
Jan  2 20:38:00.737: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 20:38:00.737: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 20:38:00.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:38:01.004: INFO: rc: 1
Jan  2 20:38:01.004: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001c65860 exit status 1   true [0xc001430be8 0xc001430c00 0xc001430c18] [0xc001430be8 0xc001430c00 0xc001430c18] [0xc001430bf8 0xc001430c10] [0x935700 0x935700] 0xc001dee180 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  2 20:38:11.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:38:11.670: INFO: stderr: "I0102 20:38:11.399179    1870 log.go:172] (0xc0007f6420) (0xc0006e8640) Create stream\nI0102 20:38:11.399492    1870 log.go:172] (0xc0007f6420) (0xc0006e8640) Stream added, broadcasting: 1\nI0102 20:38:11.405047    1870 log.go:172] (0xc0007f6420) Reply frame received for 1\nI0102 20:38:11.405084    1870 log.go:172] (0xc0007f6420) (0xc0006e86e0) Create stream\nI0102 20:38:11.405092    1870 log.go:172] (0xc0007f6420) (0xc0006e86e0) Stream added, broadcasting: 3\nI0102 20:38:11.405953    1870 log.go:172] (0xc0007f6420) Reply frame received for 3\nI0102 20:38:11.405977    1870 log.go:172] (0xc0007f6420) (0xc0005a8d20) Create stream\nI0102 20:38:11.405986    1870 log.go:172] (0xc0007f6420) (0xc0005a8d20) Stream added, broadcasting: 5\nI0102 20:38:11.407199    1870 log.go:172] (0xc0007f6420) Reply frame received for 5\nI0102 20:38:11.517084    1870 log.go:172] (0xc0007f6420) Data frame received for 3\nI0102 20:38:11.517186    1870 log.go:172] (0xc0006e86e0) (3) Data frame handling\nI0102 20:38:11.517212    1870 log.go:172] (0xc0006e86e0) (3) Data frame sent\nI0102 20:38:11.517246    1870 log.go:172] (0xc0007f6420) Data frame received for 5\nI0102 20:38:11.517279    1870 log.go:172] (0xc0005a8d20) (5) Data frame handling\nI0102 20:38:11.517313    1870 log.go:172] (0xc0005a8d20) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0102 20:38:11.654469    1870 log.go:172] (0xc0007f6420) Data frame received for 1\nI0102 20:38:11.654622    1870 log.go:172] (0xc0006e8640) (1) Data frame handling\nI0102 20:38:11.654660    1870 log.go:172] (0xc0007f6420) (0xc0006e86e0) Stream removed, broadcasting: 3\nI0102 20:38:11.654760    1870 log.go:172] (0xc0006e8640) (1) Data frame sent\nI0102 20:38:11.654788    1870 log.go:172] (0xc0007f6420) (0xc0006e8640) Stream removed, broadcasting: 1\nI0102 20:38:11.655552    1870 log.go:172] (0xc0007f6420) (0xc0005a8d20) Stream removed, broadcasting: 5\nI0102 20:38:11.655601    1870 log.go:172] (0xc0007f6420) Go away received\nI0102 20:38:11.655791    1870 log.go:172] (0xc0007f6420) (0xc0006e8640) Stream removed, broadcasting: 1\nI0102 20:38:11.656090    1870 log.go:172] (0xc0007f6420) (0xc0006e86e0) Stream removed, broadcasting: 3\nI0102 20:38:11.656201    1870 log.go:172] (0xc0007f6420) (0xc0005a8d20) Stream removed, broadcasting: 5\n"
Jan  2 20:38:11.671: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 20:38:11.671: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 20:38:11.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:38:12.128: INFO: stderr: "I0102 20:38:11.886997    1891 log.go:172] (0xc00073a370) (0xc000760640) Create stream\nI0102 20:38:11.887261    1891 log.go:172] (0xc00073a370) (0xc000760640) Stream added, broadcasting: 1\nI0102 20:38:11.891922    1891 log.go:172] (0xc00073a370) Reply frame received for 1\nI0102 20:38:11.891978    1891 log.go:172] (0xc00073a370) (0xc000666e60) Create stream\nI0102 20:38:11.891990    1891 log.go:172] (0xc00073a370) (0xc000666e60) Stream added, broadcasting: 3\nI0102 20:38:11.893173    1891 log.go:172] (0xc00073a370) Reply frame received for 3\nI0102 20:38:11.893192    1891 log.go:172] (0xc00073a370) (0xc0003aa000) Create stream\nI0102 20:38:11.893201    1891 log.go:172] (0xc00073a370) (0xc0003aa000) Stream added, broadcasting: 5\nI0102 20:38:11.894331    1891 log.go:172] (0xc00073a370) Reply frame received for 5\nI0102 20:38:11.996652    1891 log.go:172] (0xc00073a370) Data frame received for 3\nI0102 20:38:11.996787    1891 log.go:172] (0xc000666e60) (3) Data frame handling\nI0102 20:38:11.996805    1891 log.go:172] (0xc000666e60) (3) Data frame sent\nI0102 20:38:11.996875    1891 log.go:172] (0xc00073a370) Data frame received for 5\nI0102 20:38:11.996882    1891 log.go:172] (0xc0003aa000) (5) Data frame handling\nI0102 20:38:11.996898    1891 log.go:172] (0xc0003aa000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0102 20:38:12.116864    1891 log.go:172] (0xc00073a370) (0xc000666e60) Stream removed, broadcasting: 3\nI0102 20:38:12.117330    1891 log.go:172] (0xc00073a370) Data frame received for 1\nI0102 20:38:12.117367    1891 log.go:172] (0xc000760640) (1) Data frame handling\nI0102 20:38:12.117386    1891 log.go:172] (0xc000760640) (1) Data frame sent\nI0102 20:38:12.117431    1891 log.go:172] (0xc00073a370) (0xc000760640) Stream removed, broadcasting: 1\nI0102 20:38:12.117625    1891 log.go:172] (0xc00073a370) (0xc0003aa000) Stream removed, broadcasting: 5\nI0102 20:38:12.117811    1891 log.go:172] (0xc00073a370) Go away received\nI0102 20:38:12.118242    1891 log.go:172] (0xc00073a370) (0xc000760640) Stream removed, broadcasting: 1\nI0102 20:38:12.118281    1891 log.go:172] (0xc00073a370) (0xc000666e60) Stream removed, broadcasting: 3\nI0102 20:38:12.118302    1891 log.go:172] (0xc00073a370) (0xc0003aa000) Stream removed, broadcasting: 5\n"
Jan  2 20:38:12.128: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 20:38:12.128: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 20:38:12.156: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:38:12.156: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:38:12.156: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan  2 20:38:12.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 20:38:12.658: INFO: stderr: "I0102 20:38:12.348571    1913 log.go:172] (0xc0007de2c0) (0xc0006da640) Create stream\nI0102 20:38:12.348825    1913 log.go:172] (0xc0007de2c0) (0xc0006da640) Stream added, broadcasting: 1\nI0102 20:38:12.355428    1913 log.go:172] (0xc0007de2c0) Reply frame received for 1\nI0102 20:38:12.355512    1913 log.go:172] (0xc0007de2c0) (0xc000632be0) Create stream\nI0102 20:38:12.355522    1913 log.go:172] (0xc0007de2c0) (0xc000632be0) Stream added, broadcasting: 3\nI0102 20:38:12.357338    1913 log.go:172] (0xc0007de2c0) Reply frame received for 3\nI0102 20:38:12.357389    1913 log.go:172] (0xc0007de2c0) (0xc000686000) Create stream\nI0102 20:38:12.357420    1913 log.go:172] (0xc0007de2c0) (0xc000686000) Stream added, broadcasting: 5\nI0102 20:38:12.358301    1913 log.go:172] (0xc0007de2c0) Reply frame received for 5\nI0102 20:38:12.496036    1913 log.go:172] (0xc0007de2c0) Data frame received for 3\nI0102 20:38:12.496230    1913 log.go:172] (0xc000632be0) (3) Data frame handling\nI0102 20:38:12.496269    1913 log.go:172] (0xc000632be0) (3) Data frame sent\nI0102 20:38:12.647192    1913 log.go:172] (0xc0007de2c0) Data frame received for 1\nI0102 20:38:12.647332    1913 log.go:172] (0xc0007de2c0) (0xc000632be0) Stream removed, broadcasting: 3\nI0102 20:38:12.647433    1913 log.go:172] (0xc0006da640) (1) Data frame handling\nI0102 20:38:12.647492    1913 log.go:172] (0xc0006da640) (1) Data frame sent\nI0102 20:38:12.647499    1913 log.go:172] (0xc0007de2c0) (0xc0006da640) Stream removed, broadcasting: 1\nI0102 20:38:12.648593    1913 log.go:172] (0xc0007de2c0) (0xc000686000) Stream removed, broadcasting: 5\nI0102 20:38:12.648668    1913 log.go:172] (0xc0007de2c0) Go away received\nI0102 20:38:12.649323    1913 log.go:172] (0xc0007de2c0) (0xc0006da640) Stream removed, broadcasting: 1\nI0102 20:38:12.649332    1913 log.go:172] (0xc0007de2c0) (0xc000632be0) Stream removed, broadcasting: 3\nI0102 20:38:12.649336    1913 log.go:172] (0xc0007de2c0) (0xc000686000) Stream removed, broadcasting: 5\n"
Jan  2 20:38:12.658: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 20:38:12.658: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 20:38:12.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 20:38:13.091: INFO: stderr: "I0102 20:38:12.815113    1935 log.go:172] (0xc00013a6e0) (0xc000593360) Create stream\nI0102 20:38:12.815355    1935 log.go:172] (0xc00013a6e0) (0xc000593360) Stream added, broadcasting: 1\nI0102 20:38:12.821085    1935 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0102 20:38:12.821112    1935 log.go:172] (0xc00013a6e0) (0xc000593400) Create stream\nI0102 20:38:12.821120    1935 log.go:172] (0xc00013a6e0) (0xc000593400) Stream added, broadcasting: 3\nI0102 20:38:12.822119    1935 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0102 20:38:12.822142    1935 log.go:172] (0xc00013a6e0) (0xc0006da000) Create stream\nI0102 20:38:12.822148    1935 log.go:172] (0xc00013a6e0) (0xc0006da000) Stream added, broadcasting: 5\nI0102 20:38:12.822970    1935 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0102 20:38:12.973416    1935 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0102 20:38:12.973510    1935 log.go:172] (0xc000593400) (3) Data frame handling\nI0102 20:38:12.973531    1935 log.go:172] (0xc000593400) (3) Data frame sent\nI0102 20:38:13.080410    1935 log.go:172] (0xc00013a6e0) (0xc000593400) Stream removed, broadcasting: 3\nI0102 20:38:13.080595    1935 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0102 20:38:13.080608    1935 log.go:172] (0xc000593360) (1) Data frame handling\nI0102 20:38:13.080616    1935 log.go:172] (0xc000593360) (1) Data frame sent\nI0102 20:38:13.080699    1935 log.go:172] (0xc00013a6e0) (0xc000593360) Stream removed, broadcasting: 1\nI0102 20:38:13.081115    1935 log.go:172] (0xc00013a6e0) (0xc0006da000) Stream removed, broadcasting: 5\nI0102 20:38:13.081177    1935 log.go:172] (0xc00013a6e0) (0xc000593360) Stream removed, broadcasting: 1\nI0102 20:38:13.081183    1935 log.go:172] (0xc00013a6e0) (0xc000593400) Stream removed, broadcasting: 3\nI0102 20:38:13.081186    1935 log.go:172] (0xc00013a6e0) (0xc0006da000) Stream removed, broadcasting: 5\n"
Jan  2 20:38:13.091: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 20:38:13.091: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 20:38:13.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 20:38:13.643: INFO: stderr: "I0102 20:38:13.368542    1957 log.go:172] (0xc000138790) (0xc0006915e0) Create stream\nI0102 20:38:13.368786    1957 log.go:172] (0xc000138790) (0xc0006915e0) Stream added, broadcasting: 1\nI0102 20:38:13.373287    1957 log.go:172] (0xc000138790) Reply frame received for 1\nI0102 20:38:13.373328    1957 log.go:172] (0xc000138790) (0xc0003701e0) Create stream\nI0102 20:38:13.373335    1957 log.go:172] (0xc000138790) (0xc0003701e0) Stream added, broadcasting: 3\nI0102 20:38:13.374095    1957 log.go:172] (0xc000138790) Reply frame received for 3\nI0102 20:38:13.374111    1957 log.go:172] (0xc000138790) (0xc000691680) Create stream\nI0102 20:38:13.374116    1957 log.go:172] (0xc000138790) (0xc000691680) Stream added, broadcasting: 5\nI0102 20:38:13.374944    1957 log.go:172] (0xc000138790) Reply frame received for 5\nI0102 20:38:13.524433    1957 log.go:172] (0xc000138790) Data frame received for 3\nI0102 20:38:13.524554    1957 log.go:172] (0xc0003701e0) (3) Data frame handling\nI0102 20:38:13.524574    1957 log.go:172] (0xc0003701e0) (3) Data frame sent\nI0102 20:38:13.632460    1957 log.go:172] (0xc000138790) Data frame received for 1\nI0102 20:38:13.632601    1957 log.go:172] (0xc000138790) (0xc0003701e0) Stream removed, broadcasting: 3\nI0102 20:38:13.632667    1957 log.go:172] (0xc0006915e0) (1) Data frame handling\nI0102 20:38:13.632679    1957 log.go:172] (0xc0006915e0) (1) Data frame sent\nI0102 20:38:13.632688    1957 log.go:172] (0xc000138790) (0xc0006915e0) Stream removed, broadcasting: 1\nI0102 20:38:13.633102    1957 log.go:172] (0xc000138790) (0xc000691680) Stream removed, broadcasting: 5\nI0102 20:38:13.633136    1957 log.go:172] (0xc000138790) (0xc0006915e0) Stream removed, broadcasting: 1\nI0102 20:38:13.633151    1957 log.go:172] (0xc000138790) (0xc0003701e0) Stream removed, broadcasting: 3\nI0102 20:38:13.633160    1957 log.go:172] (0xc000138790) (0xc000691680) Stream removed, broadcasting: 5\nI0102 20:38:13.633403    1957 log.go:172] (0xc000138790) Go away received\n"
Jan  2 20:38:13.644: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 20:38:13.644: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 20:38:13.644: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 20:38:13.656: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  2 20:38:23.697: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 20:38:23.698: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 20:38:23.698: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 20:38:23.765: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 20:38:23.765: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  }]
Jan  2 20:38:23.766: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:23.766: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:23.766: INFO: 
Jan  2 20:38:23.766: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 20:38:26.713: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 20:38:26.713: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  }]
Jan  2 20:38:26.713: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:26.713: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:26.713: INFO: 
Jan  2 20:38:26.713: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 20:38:27.770: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 20:38:27.771: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  }]
Jan  2 20:38:27.771: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:27.771: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:27.771: INFO: 
Jan  2 20:38:27.771: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 20:38:29.924: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 20:38:29.924: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  }]
Jan  2 20:38:29.925: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:29.925: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:29.925: INFO: 
Jan  2 20:38:29.925: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 20:38:30.949: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 20:38:30.949: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  }]
Jan  2 20:38:30.950: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:30.950: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:30.950: INFO: 
Jan  2 20:38:30.950: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 20:38:31.972: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 20:38:31.972: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:37 +0000 UTC  }]
Jan  2 20:38:31.972: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:31.973: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:31.973: INFO: 
Jan  2 20:38:31.973: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 20:38:33.076: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 20:38:33.076: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:33.076: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:38:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 20:37:48 +0000 UTC  }]
Jan  2 20:38:33.076: INFO: 
Jan  2 20:38:33.076: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-p8b82
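The scale-down itself is driven by the StatefulSet controller; the test only polls status until the replica count reaches 0. A stubbed, cluster-free sketch of that polling shape, where `get_replicas` and `scale_step` are hypothetical stand-ins for reading `status.replicas` and for controller progress:

```shell
# Stubbed sketch of the scale-to-0 wait; no cluster needed.
replicas=3                                    # matches the log's starting count
get_replicas() { echo "$replicas"; }          # stand-in for status.replicas
scale_step()   { replicas=$((replicas - 1)); } # stand-in for controller progress
while [ "$(get_replicas)" -gt 0 ]; do
  scale_step
done
echo "StatefulSet ss reached scale 0"
```

In the real test the loop also prints the per-pod condition dumps seen above each time the count is still nonzero, which is why the "has not reached scale 0, at 3" lines repeat.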
Jan  2 20:38:34.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:38:34.470: INFO: rc: 1
Jan  2 20:38:34.471: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0027bb2f0 exit status 1   true [0xc001430d30 0xc001430d48 0xc001430d60] [0xc001430d30 0xc001430d48 0xc001430d60] [0xc001430d40 0xc001430d58] [0x935700 0x935700] 0xc001def200 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
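The rc: 1 entries that follow are the test's RunHostCmd loop: retry the exec every 10s until it succeeds or the polling budget runs out. The NotFound errors are expected here, since ss-1 is being deleted by the scale-down. A self-contained sketch of the loop's shape, where `run_host_cmd` is a local stub that fails twice before succeeding, standing in for the `kubectl exec ... mv -v` command:

```shell
# Retry loop in the shape the log shows; run_host_cmd is a local stub.
tries=0
run_host_cmd() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]   # rc=1 on the first two calls, then success
}
attempt=0
until run_host_cmd; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 30 ]; then   # cap, roughly a 5m budget at 10s per try
    echo "giving up after $attempt attempts"
    exit 1
  fi
  sleep 0   # the real loop waits 10s between attempts
done
echo "succeeded on call $tries"
```

Retrying through NotFound (rather than failing fast) is deliberate: during a rolling delete the pod name can briefly resolve to no container at all, then to no pod, and only a deadline-bounded retry distinguishes that transient state from a real failure.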

Jan  2 20:38:44.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:38:44.656: INFO: rc: 1
Jan  2 20:38:44.657: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0004f94a0 exit status 1   true [0xc001430000 0xc001430018 0xc001430030] [0xc001430000 0xc001430018 0xc001430030] [0xc001430010 0xc001430028] [0x935700 0x935700] 0xc0027230e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jan  2 20:38:54.658 through 20:43:09.375: INFO: RunHostCmd retried every 10s with the same command ('/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true') and the same result (rc: 1; stderr: Error from server (NotFound): pods "ss-1" not found); 26 identical retry attempts elided.
 []  0xc001d5e240 exit status 1   true [0xc00103e060 0xc00103e090 0xc00103e0d0] [0xc00103e060 0xc00103e090 0xc00103e0d0] [0xc00103e080 0xc00103e0b8] [0x935700 0x935700] 0xc0024621e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jan  2 20:43:19.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:43:19.536: INFO: rc: 1
Jan  2 20:43:19.537: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0004f94a0 exit status 1   true [0xc0016e2000 0xc0016e2018 0xc0016e2030] [0xc0016e2000 0xc0016e2018 0xc0016e2030] [0xc0016e2010 0xc0016e2028] [0x935700 0x935700] 0xc0027230e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jan  2 20:43:29.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:43:29.657: INFO: rc: 1
Jan  2 20:43:29.657: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00135e300 exit status 1   true [0xc001430038 0xc001430050 0xc001430068] [0xc001430038 0xc001430050 0xc001430068] [0xc001430048 0xc001430060] [0x935700 0x935700] 0xc001aba4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jan  2 20:43:39.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-p8b82 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:43:39.826: INFO: rc: 1
Jan  2 20:43:39.827: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Jan  2 20:43:39.827: INFO: Scaling statefulset ss to 0
Jan  2 20:43:39.868: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 20:43:39.872: INFO: Deleting all statefulset in ns e2e-tests-statefulset-p8b82
Jan  2 20:43:39.876: INFO: Scaling statefulset ss to 0
Jan  2 20:43:39.887: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 20:43:39.890: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:43:39.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-p8b82" for this suite.
Jan  2 20:43:48.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:43:48.057: INFO: namespace: e2e-tests-statefulset-p8b82, resource: bindings, ignored listing per whitelist
Jan  2 20:43:48.206: INFO: namespace e2e-tests-statefulset-p8b82 deletion completed in 8.239539906s

• [SLOW TEST:370.935 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
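The StatefulSet test above retries the same `kubectl exec` every 10s, tolerating `NotFound` until pod "ss-1" comes back. A minimal, generic sketch of that retry pattern in shell (the function name and arguments are illustrative, not the framework's actual `RunHostCmd` implementation):

```shell
#!/bin/sh
# Retry a command up to $1 times, sleeping $2 seconds between failures,
# mirroring the "Waiting 10s to retry failed RunHostCmd" loop in the log.
retry_cmd() {
  attempts=$1; shift
  delay=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "rc: $? - retrying in ${delay}s ($i/$attempts)" >&2
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# In the log, the wrapped command is:
#   kubectl --kubeconfig=/root/.kube/config exec -n e2e-tests-statefulset-p8b82 \
#     ss-1 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
# which keeps failing with rc=1 because pod "ss-1" does not exist yet.
retry_cmd 3 0 echo ok
```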
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:43:48.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 20:43:48.562: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-9fxvr" to be "success or failure"
Jan  2 20:43:48.594: INFO: Pod "downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.91458ms
Jan  2 20:43:50.653: INFO: Pod "downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090630199s
Jan  2 20:43:52.673: INFO: Pod "downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110670974s
Jan  2 20:43:54.881: INFO: Pod "downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.319291433s
Jan  2 20:43:56.899: INFO: Pod "downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336832351s
Jan  2 20:43:58.973: INFO: Pod "downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.41061975s
STEP: Saw pod success
Jan  2 20:43:58.973: INFO: Pod "downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:43:58.992: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 20:43:59.142: INFO: Waiting for pod downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005 to disappear
Jan  2 20:43:59.151: INFO: Pod downwardapi-volume-93bae612-2da0-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:43:59.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9fxvr" for this suite.
Jan  2 20:44:05.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:44:05.418: INFO: namespace: e2e-tests-projected-9fxvr, resource: bindings, ignored listing per whitelist
Jan  2 20:44:05.494: INFO: namespace e2e-tests-projected-9fxvr deletion completed in 6.330659166s

• [SLOW TEST:17.288 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
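The downward API test above injects the container's memory limit through a projected volume and then reads it back from the mounted file. A minimal sketch of the kind of pod spec the test creates (names and image are illustrative; the actual e2e fixture differs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
  restartPolicy: Never
```

The pod runs once, prints the limit in bytes to its log, and exits; "success or failure" in the log refers to the pod reaching the Succeeded phase.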
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:44:05.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-9e0e77ba-2da0-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 20:44:05.933: INFO: Waiting up to 5m0s for pod "pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005" in namespace "e2e-tests-secrets-mvdrt" to be "success or failure"
Jan  2 20:44:05.938: INFO: Pod "pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.790061ms
Jan  2 20:44:08.126: INFO: Pod "pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192948584s
Jan  2 20:44:10.198: INFO: Pod "pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265804814s
Jan  2 20:44:12.451: INFO: Pod "pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.518489798s
Jan  2 20:44:14.487: INFO: Pod "pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55435989s
Jan  2 20:44:16.841: INFO: Pod "pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.907948456s
STEP: Saw pod success
Jan  2 20:44:16.841: INFO: Pod "pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:44:16.852: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 20:44:17.083: INFO: Waiting for pod pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005 to disappear
Jan  2 20:44:17.116: INFO: Pod pod-secrets-9e1e5874-2da0-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:44:17.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mvdrt" for this suite.
Jan  2 20:44:23.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:44:23.439: INFO: namespace: e2e-tests-secrets-mvdrt, resource: bindings, ignored listing per whitelist
Jan  2 20:44:23.447: INFO: namespace e2e-tests-secrets-mvdrt deletion completed in 6.281581375s

• [SLOW TEST:17.952 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
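The secret-volume test above mounts the secret "with mappings", i.e. with an `items` list that exposes a key under a different file path inside the volume. A minimal sketch of that shape (secret name, key, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # illustrative name
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example   # illustrative name
      items:
      - key: data-1                  # key inside the Secret
        path: new-path-data-1        # file name the container sees
  restartPolicy: Never
```

Without `items`, every key in the Secret is projected under its own name; with the mapping, only the listed keys appear, at the paths given.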
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:44:23.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 20:44:23.862: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  2 20:44:23.905: INFO: Number of nodes with available pods: 0
Jan  2 20:44:23.905: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  2 20:44:24.350: INFO: Number of nodes with available pods: 0
Jan  2 20:44:24.351: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:25.362: INFO: Number of nodes with available pods: 0
Jan  2 20:44:25.362: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:26.443: INFO: Number of nodes with available pods: 0
Jan  2 20:44:26.443: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:27.369: INFO: Number of nodes with available pods: 0
Jan  2 20:44:27.369: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:28.396: INFO: Number of nodes with available pods: 0
Jan  2 20:44:28.396: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:30.287: INFO: Number of nodes with available pods: 0
Jan  2 20:44:30.287: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:30.847: INFO: Number of nodes with available pods: 0
Jan  2 20:44:30.847: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:31.374: INFO: Number of nodes with available pods: 0
Jan  2 20:44:31.374: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:32.493: INFO: Number of nodes with available pods: 0
Jan  2 20:44:32.493: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:33.365: INFO: Number of nodes with available pods: 0
Jan  2 20:44:33.365: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:34.367: INFO: Number of nodes with available pods: 0
Jan  2 20:44:34.367: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:35.391: INFO: Number of nodes with available pods: 1
Jan  2 20:44:35.391: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  2 20:44:35.461: INFO: Number of nodes with available pods: 1
Jan  2 20:44:35.461: INFO: Number of running nodes: 0, number of available pods: 1
Jan  2 20:44:36.519: INFO: Number of nodes with available pods: 0
Jan  2 20:44:36.520: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  2 20:44:36.703: INFO: Number of nodes with available pods: 0
Jan  2 20:44:36.704: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:37.720: INFO: Number of nodes with available pods: 0
Jan  2 20:44:37.720: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:38.765: INFO: Number of nodes with available pods: 0
Jan  2 20:44:38.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:39.718: INFO: Number of nodes with available pods: 0
Jan  2 20:44:39.718: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:40.723: INFO: Number of nodes with available pods: 0
Jan  2 20:44:40.723: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:41.733: INFO: Number of nodes with available pods: 0
Jan  2 20:44:41.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:42.816: INFO: Number of nodes with available pods: 0
Jan  2 20:44:42.816: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:43.718: INFO: Number of nodes with available pods: 0
Jan  2 20:44:43.719: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:44.742: INFO: Number of nodes with available pods: 0
Jan  2 20:44:44.742: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:45.724: INFO: Number of nodes with available pods: 0
Jan  2 20:44:45.724: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:46.738: INFO: Number of nodes with available pods: 0
Jan  2 20:44:46.739: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:48.620: INFO: Number of nodes with available pods: 0
Jan  2 20:44:48.620: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:48.715: INFO: Number of nodes with available pods: 0
Jan  2 20:44:48.716: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:49.743: INFO: Number of nodes with available pods: 0
Jan  2 20:44:49.743: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:50.776: INFO: Number of nodes with available pods: 0
Jan  2 20:44:50.776: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:51.717: INFO: Number of nodes with available pods: 0
Jan  2 20:44:51.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:44:52.720: INFO: Number of nodes with available pods: 1
Jan  2 20:44:52.720: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-trzrb, will wait for the garbage collector to delete the pods
Jan  2 20:44:52.808: INFO: Deleting DaemonSet.extensions daemon-set took: 22.437772ms
Jan  2 20:44:52.909: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.451462ms
Jan  2 20:45:12.714: INFO: Number of nodes with available pods: 0
Jan  2 20:45:12.714: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 20:45:12.717: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-trzrb/daemonsets","resourceVersion":"16963002"},"items":null}

Jan  2 20:45:12.720: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-trzrb/pods","resourceVersion":"16963002"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:45:12.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-trzrb" for this suite.
Jan  2 20:45:20.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:45:21.041: INFO: namespace: e2e-tests-daemonsets-trzrb, resource: bindings, ignored listing per whitelist
Jan  2 20:45:21.066: INFO: namespace e2e-tests-daemonsets-trzrb deletion completed in 8.196339149s

• [SLOW TEST:57.619 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
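The "complex daemon" test above constrains the DaemonSet with a node selector, then flips the node's label (blue, then green) and watches pods get scheduled and unscheduled accordingly. A minimal sketch of such a DaemonSet (the label key `color` and the image are assumptions for illustration; the log only says the label is changed to blue and then green):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate            # the test switches to this strategy mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                # the test later retargets the selector to green
      containers:
      - name: app
        image: nginx
```

Relabeling the node then moves the daemon pod on or off it, e.g. `kubectl label node hunter-server-hu5at5svl7ps color=green --overwrite`, which is what produces the "running nodes: 0" then "running nodes: 1" transitions in the log.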
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:45:21.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  2 20:45:21.380: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-a,UID:cb0f9b26-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963036,Generation:0,CreationTimestamp:2020-01-02 20:45:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 20:45:21.380: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-a,UID:cb0f9b26-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963036,Generation:0,CreationTimestamp:2020-01-02 20:45:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  2 20:45:31.416: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-a,UID:cb0f9b26-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963049,Generation:0,CreationTimestamp:2020-01-02 20:45:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  2 20:45:31.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-a,UID:cb0f9b26-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963049,Generation:0,CreationTimestamp:2020-01-02 20:45:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  2 20:45:41.473: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-a,UID:cb0f9b26-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963062,Generation:0,CreationTimestamp:2020-01-02 20:45:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 20:45:41.474: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-a,UID:cb0f9b26-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963062,Generation:0,CreationTimestamp:2020-01-02 20:45:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  2 20:45:51.518: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-a,UID:cb0f9b26-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963075,Generation:0,CreationTimestamp:2020-01-02 20:45:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 20:45:51.519: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-a,UID:cb0f9b26-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963075,Generation:0,CreationTimestamp:2020-01-02 20:45:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  2 20:46:01.560: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-b,UID:e30997ce-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963088,Generation:0,CreationTimestamp:2020-01-02 20:46:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 20:46:01.561: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-b,UID:e30997ce-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963088,Generation:0,CreationTimestamp:2020-01-02 20:46:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  2 20:46:11.612: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-b,UID:e30997ce-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963101,Generation:0,CreationTimestamp:2020-01-02 20:46:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 20:46:11.613: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fmjx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fmjx9/configmaps/e2e-watch-test-configmap-b,UID:e30997ce-2da0-11ea-a994-fa163e34d433,ResourceVersion:16963101,Generation:0,CreationTimestamp:2020-01-02 20:46:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:46:21.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fmjx9" for this suite.
Jan  2 20:46:27.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:46:27.738: INFO: namespace: e2e-tests-watch-fmjx9, resource: bindings, ignored listing per whitelist
Jan  2 20:46:27.827: INFO: namespace e2e-tests-watch-fmjx9 deletion completed in 6.201071128s

• [SLOW TEST:66.760 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:46:27.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-f2cf6619-2da0-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 20:46:28.077: INFO: Waiting up to 5m0s for pod "pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005" in namespace "e2e-tests-secrets-v7r59" to be "success or failure"
Jan  2 20:46:28.172: INFO: Pod "pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 94.925005ms
Jan  2 20:46:30.180: INFO: Pod "pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10319864s
Jan  2 20:46:32.267: INFO: Pod "pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189814826s
Jan  2 20:46:34.471: INFO: Pod "pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394349455s
Jan  2 20:46:36.514: INFO: Pod "pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436797098s
Jan  2 20:46:38.535: INFO: Pod "pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.458005403s
STEP: Saw pod success
Jan  2 20:46:38.535: INFO: Pod "pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:46:38.548: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 20:46:39.810: INFO: Waiting for pod pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005 to disappear
Jan  2 20:46:39.829: INFO: Pod pod-secrets-f2d1802a-2da0-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:46:39.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-v7r59" for this suite.
Jan  2 20:46:45.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:46:46.006: INFO: namespace: e2e-tests-secrets-v7r59, resource: bindings, ignored listing per whitelist
Jan  2 20:46:46.074: INFO: namespace e2e-tests-secrets-v7r59 deletion completed in 6.215500763s

• [SLOW TEST:18.247 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:46:46.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  2 20:46:46.506: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:47:10.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wnbvb" for this suite.
Jan  2 20:47:36.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:47:36.724: INFO: namespace: e2e-tests-init-container-wnbvb, resource: bindings, ignored listing per whitelist
Jan  2 20:47:36.763: INFO: namespace e2e-tests-init-container-wnbvb deletion completed in 26.271314398s

• [SLOW TEST:50.688 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:47:36.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-wskpk/configmap-test-1be87f8c-2da1-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 20:47:36.969: INFO: Waiting up to 5m0s for pod "pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005" in namespace "e2e-tests-configmap-wskpk" to be "success or failure"
Jan  2 20:47:36.985: INFO: Pod "pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.86368ms
Jan  2 20:47:38.998: INFO: Pod "pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028599115s
Jan  2 20:47:41.021: INFO: Pod "pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051933068s
Jan  2 20:47:43.030: INFO: Pod "pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060979609s
Jan  2 20:47:45.051: INFO: Pod "pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081496837s
Jan  2 20:47:47.075: INFO: Pod "pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106192727s
STEP: Saw pod success
Jan  2 20:47:47.075: INFO: Pod "pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:47:47.083: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005 container env-test: 
STEP: delete the pod
Jan  2 20:47:47.166: INFO: Waiting for pod pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005 to disappear
Jan  2 20:47:47.188: INFO: Pod pod-configmaps-1beaf8c5-2da1-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:47:47.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wskpk" for this suite.
Jan  2 20:47:53.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:47:53.440: INFO: namespace: e2e-tests-configmap-wskpk, resource: bindings, ignored listing per whitelist
Jan  2 20:47:53.471: INFO: namespace e2e-tests-configmap-wskpk deletion completed in 6.214004398s

• [SLOW TEST:16.708 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:47:53.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 20:47:53.803: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-xbzwk" to be "success or failure"
Jan  2 20:47:53.943: INFO: Pod "downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 139.534951ms
Jan  2 20:47:55.956: INFO: Pod "downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152814411s
Jan  2 20:47:57.975: INFO: Pod "downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171258283s
Jan  2 20:48:00.481: INFO: Pod "downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.677702796s
Jan  2 20:48:02.509: INFO: Pod "downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.705837476s
Jan  2 20:48:04.536: INFO: Pod "downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.732422384s
Jan  2 20:48:06.576: INFO: Pod "downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.772324882s
STEP: Saw pod success
Jan  2 20:48:06.576: INFO: Pod "downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:48:06.602: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 20:48:07.598: INFO: Waiting for pod downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005 to disappear
Jan  2 20:48:07.751: INFO: Pod downwardapi-volume-25d9f80c-2da1-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:48:07.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xbzwk" for this suite.
Jan  2 20:48:13.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:48:13.924: INFO: namespace: e2e-tests-downward-api-xbzwk, resource: bindings, ignored listing per whitelist
Jan  2 20:48:14.016: INFO: namespace e2e-tests-downward-api-xbzwk deletion completed in 6.255067168s

• [SLOW TEST:20.545 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:48:14.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 20:48:14.334: INFO: Waiting up to 5m0s for pod "downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-2hghl" to be "success or failure"
Jan  2 20:48:14.345: INFO: Pod "downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.830741ms
Jan  2 20:48:16.421: INFO: Pod "downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087028225s
Jan  2 20:48:18.449: INFO: Pod "downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115314502s
Jan  2 20:48:21.682: INFO: Pod "downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.347963117s
Jan  2 20:48:23.700: INFO: Pod "downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.366111724s
Jan  2 20:48:25.716: INFO: Pod "downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.381748265s
STEP: Saw pod success
Jan  2 20:48:25.716: INFO: Pod "downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:48:25.721: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 20:48:25.775: INFO: Waiting for pod downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005 to disappear
Jan  2 20:48:25.996: INFO: Pod downwardapi-volume-322dbb60-2da1-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:48:25.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2hghl" for this suite.
Jan  2 20:48:33.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:48:33.296: INFO: namespace: e2e-tests-projected-2hghl, resource: bindings, ignored listing per whitelist
Jan  2 20:48:33.361: INFO: namespace e2e-tests-projected-2hghl deletion completed in 7.334618762s

• [SLOW TEST:19.344 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:48:33.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 20:48:33.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-jvjrh" to be "success or failure"
Jan  2 20:48:33.582: INFO: Pod "downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 49.391365ms
Jan  2 20:48:35.607: INFO: Pod "downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074370199s
Jan  2 20:48:37.622: INFO: Pod "downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089263396s
Jan  2 20:48:40.554: INFO: Pod "downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.021751201s
Jan  2 20:48:42.586: INFO: Pod "downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.053663386s
Jan  2 20:48:44.627: INFO: Pod "downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.09425869s
STEP: Saw pod success
Jan  2 20:48:44.627: INFO: Pod "downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:48:44.640: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 20:48:45.021: INFO: Waiting for pod downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005 to disappear
Jan  2 20:48:45.035: INFO: Pod downwardapi-volume-3d9febe6-2da1-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:48:45.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jvjrh" for this suite.
Jan  2 20:48:51.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:48:51.168: INFO: namespace: e2e-tests-downward-api-jvjrh, resource: bindings, ignored listing per whitelist
Jan  2 20:48:51.257: INFO: namespace e2e-tests-downward-api-jvjrh deletion completed in 6.211812453s

• [SLOW TEST:17.896 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:48:51.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-4853a3d6-2da1-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 20:48:51.483: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-95tt4" to be "success or failure"
Jan  2 20:48:51.497: INFO: Pod "pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.091352ms
Jan  2 20:48:53.509: INFO: Pod "pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025818252s
Jan  2 20:48:55.529: INFO: Pod "pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046108723s
Jan  2 20:48:58.090: INFO: Pod "pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.606920692s
Jan  2 20:49:00.106: INFO: Pod "pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622366373s
Jan  2 20:49:02.147: INFO: Pod "pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.664056554s
STEP: Saw pod success
Jan  2 20:49:02.148: INFO: Pod "pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:49:02.177: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 20:49:02.375: INFO: Waiting for pod pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005 to disappear
Jan  2 20:49:02.488: INFO: Pod pod-projected-secrets-4854833f-2da1-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:49:02.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-95tt4" for this suite.
Jan  2 20:49:08.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:49:09.002: INFO: namespace: e2e-tests-projected-95tt4, resource: bindings, ignored listing per whitelist
Jan  2 20:49:09.053: INFO: namespace e2e-tests-projected-95tt4 deletion completed in 6.512296698s

• [SLOW TEST:17.795 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:49:09.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-52ec0360-2da1-11ea-814c-0242ac110005
STEP: Creating secret with name s-test-opt-upd-52ec048f-2da1-11ea-814c-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-52ec0360-2da1-11ea-814c-0242ac110005
STEP: Updating secret s-test-opt-upd-52ec048f-2da1-11ea-814c-0242ac110005
STEP: Creating secret with name s-test-opt-create-52ec050a-2da1-11ea-814c-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:49:31.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-82kzk" for this suite.
Jan  2 20:50:01.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:50:01.648: INFO: namespace: e2e-tests-projected-82kzk, resource: bindings, ignored listing per whitelist
Jan  2 20:50:01.746: INFO: namespace e2e-tests-projected-82kzk deletion completed in 30.195942252s

• [SLOW TEST:52.693 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:50:01.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 20:50:01.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-9pc5q" to be "success or failure"
Jan  2 20:50:01.967: INFO: Pod "downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.125909ms
Jan  2 20:50:04.249: INFO: Pod "downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296918204s
Jan  2 20:50:06.265: INFO: Pod "downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312915446s
Jan  2 20:50:08.281: INFO: Pod "downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.329499551s
Jan  2 20:50:11.414: INFO: Pod "downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.462345643s
Jan  2 20:50:13.447: INFO: Pod "downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.495303732s
Jan  2 20:50:15.461: INFO: Pod "downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.509252495s
STEP: Saw pod success
Jan  2 20:50:15.461: INFO: Pod "downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:50:15.468: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 20:50:15.556: INFO: Waiting for pod downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005 to disappear
Jan  2 20:50:15.659: INFO: Pod downwardapi-volume-7254a32c-2da1-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:50:15.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9pc5q" for this suite.
Jan  2 20:50:21.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:50:21.984: INFO: namespace: e2e-tests-downward-api-9pc5q, resource: bindings, ignored listing per whitelist
Jan  2 20:50:22.086: INFO: namespace e2e-tests-downward-api-9pc5q deletion completed in 6.241064705s

• [SLOW TEST:20.339 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:50:22.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-6gbnd
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  2 20:50:22.347: INFO: Found 0 stateful pods, waiting for 3
Jan  2 20:50:32.356: INFO: Found 1 stateful pods, waiting for 3
Jan  2 20:50:42.365: INFO: Found 2 stateful pods, waiting for 3
Jan  2 20:50:52.545: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:50:52.545: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:50:52.545: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 20:51:02.374: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:51:02.374: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:51:02.374: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 20:51:02.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6gbnd ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 20:51:03.500: INFO: stderr: "I0102 20:51:02.818454    2605 log.go:172] (0xc0007a02c0) (0xc00063f2c0) Create stream\nI0102 20:51:02.818715    2605 log.go:172] (0xc0007a02c0) (0xc00063f2c0) Stream added, broadcasting: 1\nI0102 20:51:02.827583    2605 log.go:172] (0xc0007a02c0) Reply frame received for 1\nI0102 20:51:02.827670    2605 log.go:172] (0xc0007a02c0) (0xc0006f2000) Create stream\nI0102 20:51:02.827686    2605 log.go:172] (0xc0007a02c0) (0xc0006f2000) Stream added, broadcasting: 3\nI0102 20:51:02.830098    2605 log.go:172] (0xc0007a02c0) Reply frame received for 3\nI0102 20:51:02.830241    2605 log.go:172] (0xc0007a02c0) (0xc00063f360) Create stream\nI0102 20:51:02.830250    2605 log.go:172] (0xc0007a02c0) (0xc00063f360) Stream added, broadcasting: 5\nI0102 20:51:02.831433    2605 log.go:172] (0xc0007a02c0) Reply frame received for 5\nI0102 20:51:03.333437    2605 log.go:172] (0xc0007a02c0) Data frame received for 3\nI0102 20:51:03.333615    2605 log.go:172] (0xc0006f2000) (3) Data frame handling\nI0102 20:51:03.333642    2605 log.go:172] (0xc0006f2000) (3) Data frame sent\nI0102 20:51:03.487694    2605 log.go:172] (0xc0007a02c0) Data frame received for 1\nI0102 20:51:03.487901    2605 log.go:172] (0xc0007a02c0) (0xc0006f2000) Stream removed, broadcasting: 3\nI0102 20:51:03.487956    2605 log.go:172] (0xc00063f2c0) (1) Data frame handling\nI0102 20:51:03.487972    2605 log.go:172] (0xc00063f2c0) (1) Data frame sent\nI0102 20:51:03.488152    2605 log.go:172] (0xc0007a02c0) (0xc00063f2c0) Stream removed, broadcasting: 1\nI0102 20:51:03.488289    2605 log.go:172] (0xc0007a02c0) (0xc00063f360) Stream removed, broadcasting: 5\nI0102 20:51:03.488331    2605 log.go:172] (0xc0007a02c0) Go away received\nI0102 20:51:03.489010    2605 log.go:172] (0xc0007a02c0) (0xc00063f2c0) Stream removed, broadcasting: 1\nI0102 20:51:03.489063    2605 log.go:172] (0xc0007a02c0) (0xc0006f2000) Stream removed, broadcasting: 3\nI0102 20:51:03.489077    2605 log.go:172] (0xc0007a02c0) (0xc00063f360) Stream removed, broadcasting: 5\n"
Jan  2 20:51:03.500: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 20:51:03.500: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  2 20:51:13.637: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  2 20:51:24.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6gbnd ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:51:24.766: INFO: stderr: "I0102 20:51:24.367058    2627 log.go:172] (0xc00015c840) (0xc000641400) Create stream\nI0102 20:51:24.367667    2627 log.go:172] (0xc00015c840) (0xc000641400) Stream added, broadcasting: 1\nI0102 20:51:24.375762    2627 log.go:172] (0xc00015c840) Reply frame received for 1\nI0102 20:51:24.375834    2627 log.go:172] (0xc00015c840) (0xc000790000) Create stream\nI0102 20:51:24.375841    2627 log.go:172] (0xc00015c840) (0xc000790000) Stream added, broadcasting: 3\nI0102 20:51:24.377509    2627 log.go:172] (0xc00015c840) Reply frame received for 3\nI0102 20:51:24.377539    2627 log.go:172] (0xc00015c840) (0xc00063c000) Create stream\nI0102 20:51:24.377552    2627 log.go:172] (0xc00015c840) (0xc00063c000) Stream added, broadcasting: 5\nI0102 20:51:24.378459    2627 log.go:172] (0xc00015c840) Reply frame received for 5\nI0102 20:51:24.534773    2627 log.go:172] (0xc00015c840) Data frame received for 3\nI0102 20:51:24.535161    2627 log.go:172] (0xc000790000) (3) Data frame handling\nI0102 20:51:24.535237    2627 log.go:172] (0xc000790000) (3) Data frame sent\nI0102 20:51:24.752592    2627 log.go:172] (0xc00015c840) (0xc000790000) Stream removed, broadcasting: 3\nI0102 20:51:24.752901    2627 log.go:172] (0xc00015c840) Data frame received for 1\nI0102 20:51:24.752919    2627 log.go:172] (0xc000641400) (1) Data frame handling\nI0102 20:51:24.752934    2627 log.go:172] (0xc000641400) (1) Data frame sent\nI0102 20:51:24.752942    2627 log.go:172] (0xc00015c840) (0xc000641400) Stream removed, broadcasting: 1\nI0102 20:51:24.753725    2627 log.go:172] (0xc00015c840) (0xc00063c000) Stream removed, broadcasting: 5\nI0102 20:51:24.753816    2627 log.go:172] (0xc00015c840) (0xc000641400) Stream removed, broadcasting: 1\nI0102 20:51:24.753827    2627 log.go:172] (0xc00015c840) (0xc000790000) Stream removed, broadcasting: 3\nI0102 20:51:24.753835    2627 log.go:172] (0xc00015c840) (0xc00063c000) Stream removed, broadcasting: 5\nI0102 20:51:24.753911    2627 log.go:172] (0xc00015c840) Go away received\n"
Jan  2 20:51:24.767: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 20:51:24.767: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 20:51:24.871: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
Jan  2 20:51:24.871: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 20:51:24.871: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 20:51:24.871: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 20:51:34.899: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
Jan  2 20:51:34.899: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 20:51:34.899: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 20:51:44.898: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
Jan  2 20:51:44.898: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 20:51:44.898: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 20:51:54.982: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
Jan  2 20:51:54.983: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 20:52:05.039: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
Jan  2 20:52:05.039: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 20:52:16.456: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  2 20:52:24.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6gbnd ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 20:52:25.584: INFO: stderr: "I0102 20:52:25.127809    2649 log.go:172] (0xc000138790) (0xc000597360) Create stream\nI0102 20:52:25.128330    2649 log.go:172] (0xc000138790) (0xc000597360) Stream added, broadcasting: 1\nI0102 20:52:25.137492    2649 log.go:172] (0xc000138790) Reply frame received for 1\nI0102 20:52:25.137551    2649 log.go:172] (0xc000138790) (0xc0007a8000) Create stream\nI0102 20:52:25.137560    2649 log.go:172] (0xc000138790) (0xc0007a8000) Stream added, broadcasting: 3\nI0102 20:52:25.142754    2649 log.go:172] (0xc000138790) Reply frame received for 3\nI0102 20:52:25.142821    2649 log.go:172] (0xc000138790) (0xc000514000) Create stream\nI0102 20:52:25.142842    2649 log.go:172] (0xc000138790) (0xc000514000) Stream added, broadcasting: 5\nI0102 20:52:25.143711    2649 log.go:172] (0xc000138790) Reply frame received for 5\nI0102 20:52:25.424108    2649 log.go:172] (0xc000138790) Data frame received for 3\nI0102 20:52:25.424202    2649 log.go:172] (0xc0007a8000) (3) Data frame handling\nI0102 20:52:25.424217    2649 log.go:172] (0xc0007a8000) (3) Data frame sent\nI0102 20:52:25.568767    2649 log.go:172] (0xc000138790) Data frame received for 1\nI0102 20:52:25.568991    2649 log.go:172] (0xc000138790) (0xc0007a8000) Stream removed, broadcasting: 3\nI0102 20:52:25.569163    2649 log.go:172] (0xc000597360) (1) Data frame handling\nI0102 20:52:25.569203    2649 log.go:172] (0xc000597360) (1) Data frame sent\nI0102 20:52:25.569226    2649 log.go:172] (0xc000138790) (0xc000514000) Stream removed, broadcasting: 5\nI0102 20:52:25.569327    2649 log.go:172] (0xc000138790) (0xc000597360) Stream removed, broadcasting: 1\nI0102 20:52:25.569379    2649 log.go:172] (0xc000138790) Go away received\nI0102 20:52:25.570524    2649 log.go:172] (0xc000138790) (0xc000597360) Stream removed, broadcasting: 1\nI0102 20:52:25.570584    2649 log.go:172] (0xc000138790) (0xc0007a8000) Stream removed, broadcasting: 3\nI0102 20:52:25.570594    2649 log.go:172] (0xc000138790) (0xc000514000) Stream removed, broadcasting: 5\n"
Jan  2 20:52:25.584: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 20:52:25.584: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 20:52:35.660: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  2 20:52:45.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6gbnd ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 20:52:46.351: INFO: stderr: "I0102 20:52:46.035587    2671 log.go:172] (0xc00071e370) (0xc000796640) Create stream\nI0102 20:52:46.035954    2671 log.go:172] (0xc00071e370) (0xc000796640) Stream added, broadcasting: 1\nI0102 20:52:46.044357    2671 log.go:172] (0xc00071e370) Reply frame received for 1\nI0102 20:52:46.044477    2671 log.go:172] (0xc00071e370) (0xc000656c80) Create stream\nI0102 20:52:46.044491    2671 log.go:172] (0xc00071e370) (0xc000656c80) Stream added, broadcasting: 3\nI0102 20:52:46.045735    2671 log.go:172] (0xc00071e370) Reply frame received for 3\nI0102 20:52:46.045766    2671 log.go:172] (0xc00071e370) (0xc0007966e0) Create stream\nI0102 20:52:46.045780    2671 log.go:172] (0xc00071e370) (0xc0007966e0) Stream added, broadcasting: 5\nI0102 20:52:46.046971    2671 log.go:172] (0xc00071e370) Reply frame received for 5\nI0102 20:52:46.176711    2671 log.go:172] (0xc00071e370) Data frame received for 3\nI0102 20:52:46.176823    2671 log.go:172] (0xc000656c80) (3) Data frame handling\nI0102 20:52:46.176858    2671 log.go:172] (0xc000656c80) (3) Data frame sent\nI0102 20:52:46.335628    2671 log.go:172] (0xc00071e370) Data frame received for 1\nI0102 20:52:46.335818    2671 log.go:172] (0xc00071e370) (0xc0007966e0) Stream removed, broadcasting: 5\nI0102 20:52:46.335877    2671 log.go:172] (0xc000796640) (1) Data frame handling\nI0102 20:52:46.335904    2671 log.go:172] (0xc000796640) (1) Data frame sent\nI0102 20:52:46.336048    2671 log.go:172] (0xc00071e370) (0xc000656c80) Stream removed, broadcasting: 3\nI0102 20:52:46.336104    2671 log.go:172] (0xc00071e370) (0xc000796640) Stream removed, broadcasting: 1\nI0102 20:52:46.336117    2671 log.go:172] (0xc00071e370) Go away received\nI0102 20:52:46.336826    2671 log.go:172] (0xc00071e370) (0xc000796640) Stream removed, broadcasting: 1\nI0102 20:52:46.336854    2671 log.go:172] (0xc00071e370) (0xc000656c80) Stream removed, broadcasting: 3\nI0102 20:52:46.336866    2671 log.go:172] (0xc00071e370) (0xc0007966e0) Stream removed, broadcasting: 5\n"
Jan  2 20:52:46.351: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 20:52:46.351: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 20:52:56.437: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
Jan  2 20:52:56.437: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 20:52:56.437: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 20:53:07.141: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
Jan  2 20:53:07.142: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 20:53:07.142: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 20:53:16.494: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
Jan  2 20:53:16.494: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 20:53:16.494: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 20:53:26.471: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
Jan  2 20:53:26.471: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 20:53:36.520: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
Jan  2 20:53:36.521: INFO: Waiting for Pod e2e-tests-statefulset-6gbnd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 20:53:46.515: INFO: Waiting for StatefulSet e2e-tests-statefulset-6gbnd/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 20:53:56.483: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6gbnd
Jan  2 20:53:56.505: INFO: Scaling statefulset ss2 to 0
Jan  2 20:54:26.592: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 20:54:26.600: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:54:26.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6gbnd" for this suite.
Jan  2 20:54:34.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:54:34.800: INFO: namespace: e2e-tests-statefulset-6gbnd, resource: bindings, ignored listing per whitelist
Jan  2 20:54:34.800: INFO: namespace e2e-tests-statefulset-6gbnd deletion completed in 8.169375889s

• [SLOW TEST:252.712 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
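The StatefulSet spec above toggles pod readiness by shuffling nginx's index.html with `mv -v /usr/share/nginx/html/index.html /tmp/ || true` (and back), driven through `kubectl exec ... /bin/sh -c`. A minimal local sketch of that idiom — temporary directories stand in for the container's paths, and the `sh()` helper is illustrative, not the test's code:

```python
import pathlib
import subprocess
import tempfile

# Stand-ins for the container's /usr/share/nginx/html and /tmp
webroot = pathlib.Path(tempfile.mkdtemp())
stash = pathlib.Path(tempfile.mkdtemp())
(webroot / "index.html").write_text("hello\n")

def sh(cmd):
    # Same '/bin/sh -c <cmd>' shape the test drives through kubectl exec
    return subprocess.run(["/bin/sh", "-c", cmd], capture_output=True, text=True)

# Break readiness: move index.html away; '|| true' keeps a repeated
# invocation from failing once the file has already been moved
print(sh(f"mv -v {webroot}/index.html {stash}/ || true").stdout, end="")

# Restore readiness: move it back
print(sh(f"mv -v {stash}/index.html {webroot}/ || true").stdout, end="")
print((webroot / "index.html").read_text(), end="")
```

With the file absent, nginx returns 404 to the readiness probe and the pod goes NotReady, which is how the test holds the rolling update at a chosen ordinal.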
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:54:34.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 20:54:34.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  2 20:54:35.059: INFO: stderr: ""
Jan  2 20:54:35.059: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:54:35.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fjdv4" for this suite.
Jan  2 20:54:41.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:54:41.228: INFO: namespace: e2e-tests-kubectl-fjdv4, resource: bindings, ignored listing per whitelist
Jan  2 20:54:41.297: INFO: namespace e2e-tests-kubectl-fjdv4 deletion completed in 6.223566801s

• [SLOW TEST:6.496 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
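The kubectl-version spec captures the stdout shown above and checks that both the client and server `version.Info` structs were printed in full. A sketch of pulling the two GitVersion fields out of that output — the regex and the dict shape are illustrative, not the e2e framework's actual assertion:

```python
import re

# Abridged form of the stdout the test captured at 20:54:35 above
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.12", '
    'GitTreeState:"clean", Platform:"linux/amd64"}\n'
    'Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.8", '
    'GitTreeState:"clean", Platform:"linux/amd64"}\n'
)

# Extract the client/server GitVersion strings the check cares about
versions = dict(re.findall(r'(Client|Server) Version: .*?GitVersion:"(v[^"]+)"', stdout))
print(versions)
```

Here the client (v1.13.12) is slightly ahead of the server (v1.13.8), which is within kubectl's supported version skew.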
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:54:41.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  2 20:54:41.475: INFO: Waiting up to 5m0s for pod "pod-18eaa50b-2da2-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-m4d96" to be "success or failure"
Jan  2 20:54:41.500: INFO: Pod "pod-18eaa50b-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.575858ms
Jan  2 20:54:43.548: INFO: Pod "pod-18eaa50b-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07237481s
Jan  2 20:54:45.577: INFO: Pod "pod-18eaa50b-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10210959s
Jan  2 20:54:48.193: INFO: Pod "pod-18eaa50b-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.718197243s
Jan  2 20:54:50.207: INFO: Pod "pod-18eaa50b-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.731742017s
Jan  2 20:54:52.240: INFO: Pod "pod-18eaa50b-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.765335329s
Jan  2 20:54:54.447: INFO: Pod "pod-18eaa50b-2da2-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.972260967s
STEP: Saw pod success
Jan  2 20:54:54.449: INFO: Pod "pod-18eaa50b-2da2-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:54:54.860: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-18eaa50b-2da2-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 20:54:55.074: INFO: Waiting for pod pod-18eaa50b-2da2-11ea-814c-0242ac110005 to disappear
Jan  2 20:54:55.083: INFO: Pod pod-18eaa50b-2da2-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:54:55.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-m4d96" for this suite.
Jan  2 20:55:01.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:55:01.430: INFO: namespace: e2e-tests-emptydir-m4d96, resource: bindings, ignored listing per whitelist
Jan  2 20:55:01.497: INFO: namespace e2e-tests-emptydir-m4d96 deletion completed in 6.403777976s

• [SLOW TEST:20.199 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
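The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines come from the framework polling the pod's phase on a short interval until it reaches a terminal state. A simplified sketch of that loop — `get_phase` stands in for a real API call, and the function name is illustrative:

```python
import time

def wait_for_pod_terminal(get_phase, timeout=300.0, poll=2.0, sleep=time.sleep):
    """Poll a pod's phase until it is terminal, mirroring the framework's
    'Waiting up to 5m0s ... to be "success or failure"' loop in the log."""
    waited = 0.0
    while waited <= timeout:
        phase = get_phase()  # stand-in for reading pod.status.phase
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll)
        waited += poll
    raise TimeoutError(f"pod not terminal after {timeout:.0f}s")

# Simulate the Pending -> Pending -> Succeeded sequence seen above
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_terminal(lambda: next(phases), sleep=lambda _: None)
print(result)
```

The test then treats `Succeeded` as "success or failure" satisfied, fetches the container logs to verify the emptyDir's mode bits (0666 on tmpfs), and deletes the pod.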
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:55:01.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  2 20:55:01.825: INFO: Number of nodes with available pods: 0
Jan  2 20:55:01.825: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:02.854: INFO: Number of nodes with available pods: 0
Jan  2 20:55:02.854: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:04.026: INFO: Number of nodes with available pods: 0
Jan  2 20:55:04.026: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:04.851: INFO: Number of nodes with available pods: 0
Jan  2 20:55:04.851: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:06.002: INFO: Number of nodes with available pods: 0
Jan  2 20:55:06.003: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:09.236: INFO: Number of nodes with available pods: 0
Jan  2 20:55:09.236: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:09.849: INFO: Number of nodes with available pods: 0
Jan  2 20:55:09.849: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:10.849: INFO: Number of nodes with available pods: 0
Jan  2 20:55:10.849: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:11.855: INFO: Number of nodes with available pods: 0
Jan  2 20:55:11.855: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:12.848: INFO: Number of nodes with available pods: 1
Jan  2 20:55:12.848: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  2 20:55:13.029: INFO: Number of nodes with available pods: 0
Jan  2 20:55:13.029: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:14.067: INFO: Number of nodes with available pods: 0
Jan  2 20:55:14.067: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:15.160: INFO: Number of nodes with available pods: 0
Jan  2 20:55:15.160: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:16.485: INFO: Number of nodes with available pods: 0
Jan  2 20:55:16.485: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:17.960: INFO: Number of nodes with available pods: 0
Jan  2 20:55:17.960: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:18.226: INFO: Number of nodes with available pods: 0
Jan  2 20:55:18.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:19.101: INFO: Number of nodes with available pods: 0
Jan  2 20:55:19.101: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:20.127: INFO: Number of nodes with available pods: 0
Jan  2 20:55:20.127: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:21.792: INFO: Number of nodes with available pods: 0
Jan  2 20:55:21.793: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:22.177: INFO: Number of nodes with available pods: 0
Jan  2 20:55:22.177: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:23.052: INFO: Number of nodes with available pods: 0
Jan  2 20:55:23.052: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:24.259: INFO: Number of nodes with available pods: 0
Jan  2 20:55:24.259: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:25.044: INFO: Number of nodes with available pods: 0
Jan  2 20:55:25.044: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:26.074: INFO: Number of nodes with available pods: 0
Jan  2 20:55:26.074: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 20:55:27.055: INFO: Number of nodes with available pods: 1
Jan  2 20:55:27.055: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4549g, will wait for the garbage collector to delete the pods
Jan  2 20:55:27.163: INFO: Deleting DaemonSet.extensions daemon-set took: 46.474235ms
Jan  2 20:55:27.264: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.559487ms
Jan  2 20:55:42.969: INFO: Number of nodes with available pods: 0
Jan  2 20:55:42.969: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 20:55:42.973: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4549g/daemonsets","resourceVersion":"16964442"},"items":null}

Jan  2 20:55:42.975: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4549g/pods","resourceVersion":"16964442"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:55:42.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4549g" for this suite.
Jan  2 20:55:49.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:55:49.186: INFO: namespace: e2e-tests-daemonsets-4549g, resource: bindings, ignored listing per whitelist
Jan  2 20:55:49.295: INFO: namespace e2e-tests-daemonsets-4549g deletion completed in 6.307576053s

• [SLOW TEST:47.798 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
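The DaemonSet spec polls `Number of nodes with available pods` until every schedulable node (here just one, hunter-server-hu5at5svl7ps) runs an Available daemon pod, both after creation and again after a pod is forced to Failed and revived. A sketch of that count — pods are modeled as `(node, available)` tuples as a stand-in for pod objects:

```python
def nodes_with_available_pods(pods):
    """Count distinct nodes running at least one Available daemon pod --
    the figure the log polls as 'Number of nodes with available pods'."""
    return len({node for node, available in pods if available})

# After the pod is marked Failed it is unavailable, then revived:
before = nodes_with_available_pods([("hunter-server-hu5at5svl7ps", False)])
after = nodes_with_available_pods([("hunter-server-hu5at5svl7ps", True)])
print(before, after)
```

The spec passes once the count equals the number of running nodes (1, 1 in this single-node cluster), proving the controller recreated the failed pod.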
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:55:49.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan  2 20:55:50.156: INFO: Waiting up to 5m0s for pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j" in namespace "e2e-tests-svcaccounts-252kg" to be "success or failure"
Jan  2 20:55:50.189: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j": Phase="Pending", Reason="", readiness=false. Elapsed: 33.059653ms
Jan  2 20:55:52.357: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20107582s
Jan  2 20:55:54.372: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216246697s
Jan  2 20:55:57.031: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.875445698s
Jan  2 20:55:59.045: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.88907432s
Jan  2 20:56:01.063: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.9074997s
Jan  2 20:56:03.078: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j": Phase="Pending", Reason="", readiness=false. Elapsed: 12.92239174s
Jan  2 20:56:05.096: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j": Phase="Pending", Reason="", readiness=false. Elapsed: 14.939963223s
Jan  2 20:56:07.295: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j": Phase="Pending", Reason="", readiness=false. Elapsed: 17.139081614s
Jan  2 20:56:09.621: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.465113366s
STEP: Saw pod success
Jan  2 20:56:09.621: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j" satisfied condition "success or failure"
Jan  2 20:56:09.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j container token-test: 
STEP: delete the pod
Jan  2 20:56:10.228: INFO: Waiting for pod pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j to disappear
Jan  2 20:56:10.241: INFO: Pod pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-qnt2j no longer exists
STEP: Creating a pod to test consume service account root CA
Jan  2 20:56:10.259: INFO: Waiting up to 5m0s for pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z" in namespace "e2e-tests-svcaccounts-252kg" to be "success or failure"
Jan  2 20:56:10.553: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z": Phase="Pending", Reason="", readiness=false. Elapsed: 293.917951ms
Jan  2 20:56:12.937: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.678060708s
Jan  2 20:56:14.953: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.694854765s
Jan  2 20:56:17.063: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.804410462s
Jan  2 20:56:19.084: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.82578672s
Jan  2 20:56:21.470: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z": Phase="Pending", Reason="", readiness=false. Elapsed: 11.211397369s
Jan  2 20:56:23.485: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z": Phase="Pending", Reason="", readiness=false. Elapsed: 13.226274327s
Jan  2 20:56:25.880: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z": Phase="Pending", Reason="", readiness=false. Elapsed: 15.621781117s
Jan  2 20:56:28.825: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z": Phase="Running", Reason="", readiness=false. Elapsed: 18.565891484s
Jan  2 20:56:31.373: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.114465535s
STEP: Saw pod success
Jan  2 20:56:31.373: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z" satisfied condition "success or failure"
Jan  2 20:56:31.654: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z container root-ca-test: 
STEP: delete the pod
Jan  2 20:56:31.820: INFO: Waiting for pod pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z to disappear
Jan  2 20:56:31.831: INFO: Pod pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-p8m5z no longer exists
STEP: Creating a pod to test consume service account namespace
Jan  2 20:56:31.868: INFO: Waiting up to 5m0s for pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b" in namespace "e2e-tests-svcaccounts-252kg" to be "success or failure"
Jan  2 20:56:31.970: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b": Phase="Pending", Reason="", readiness=false. Elapsed: 102.546952ms
Jan  2 20:56:34.000: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132622951s
Jan  2 20:56:36.121: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252750203s
Jan  2 20:56:38.230: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362598122s
Jan  2 20:56:40.260: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.392605147s
Jan  2 20:56:42.434: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.565909727s
Jan  2 20:56:44.451: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.583151495s
Jan  2 20:56:46.473: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.604702171s
Jan  2 20:56:48.519: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b": Phase="Running", Reason="", readiness=false. Elapsed: 16.651403439s
Jan  2 20:56:52.117: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.249148937s
STEP: Saw pod success
Jan  2 20:56:52.117: INFO: Pod "pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b" satisfied condition "success or failure"
Jan  2 20:56:52.131: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b container namespace-test: 
STEP: delete the pod
Jan  2 20:56:52.713: INFO: Waiting for pod pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b to disappear
Jan  2 20:56:52.720: INFO: Pod pod-service-account-41dccfc1-2da2-11ea-814c-0242ac110005-5hx9b no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:56:52.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-252kg" for this suite.
Jan  2 20:57:00.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:57:00.925: INFO: namespace: e2e-tests-svcaccounts-252kg, resource: bindings, ignored listing per whitelist
Jan  2 20:57:01.061: INFO: namespace e2e-tests-svcaccounts-252kg deletion completed in 8.335752124s

• [SLOW TEST:71.766 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
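The ServiceAccounts spec above boils down to three throwaway pods, each `cat`-ing one file from the auto-mounted token volume (the standard `/var/run/secrets/kubernetes.io/serviceaccount` paths). A minimal sketch of that manifest, assuming busybox as the image; the pod name is illustrative, the three containers are collapsed into one pod here for brevity (the log shows the suite creating one pod per file):

```python
# Sketch of the manifest behind the "mount an API token into pods" test:
# each container prints one file from the auto-mounted service account volume.
# Pod name and image are illustrative, not the generated ones in the log.

TOKEN_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def token_test_pod(name: str) -> dict:
    """Build a pod that prints the token, root CA, and namespace files."""
    files = {
        "token-test": f"{TOKEN_DIR}/token",        # container names match the log
        "root-ca-test": f"{TOKEN_DIR}/ca.crt",
        "namespace-test": f"{TOKEN_DIR}/namespace",
    }
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [
                {"name": cname, "image": "busybox", "command": ["cat", path]}
                for cname, path in files.items()
            ],
        },
    }

pod = token_test_pod("pod-service-account-demo")
```

The `token-test`, `root-ca-test`, and `namespace-test` container names correspond to the three "Trying to get logs from node ... container ..." lines in the log.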
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:57:01.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 20:57:01.389: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-ncf7t" to be "success or failure"
Jan  2 20:57:01.463: INFO: Pod "downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 74.13291ms
Jan  2 20:57:03.476: INFO: Pod "downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086961785s
Jan  2 20:57:05.492: INFO: Pod "downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102230763s
Jan  2 20:57:07.612: INFO: Pod "downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.222813518s
Jan  2 20:57:09.625: INFO: Pod "downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.235302621s
Jan  2 20:57:11.638: INFO: Pod "downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.248551717s
Jan  2 20:57:13.912: INFO: Pod "downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.523191428s
STEP: Saw pod success
Jan  2 20:57:13.913: INFO: Pod "downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:57:13.923: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 20:57:14.496: INFO: Waiting for pod downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005 to disappear
Jan  2 20:57:14.517: INFO: Pod downwardapi-volume-6c548b08-2da2-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:57:14.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ncf7t" for this suite.
Jan  2 20:57:20.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:57:20.757: INFO: namespace: e2e-tests-projected-ncf7t, resource: bindings, ignored listing per whitelist
Jan  2 20:57:20.791: INFO: namespace e2e-tests-projected-ncf7t deletion completed in 6.257428319s

• [SLOW TEST:19.730 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
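The Projected downwardAPI spec above exposes the container's memory request as a file through a projected volume with a `resourceFieldRef` source. A minimal sketch of that volume definition, assuming illustrative volume and file names:

```python
# Sketch of the projected downwardAPI volume used by the test above: the
# container's memory request becomes a readable file inside the pod.
# "podinfo" and "mem_request" are illustrative names, not from the log.

def memory_request_volume(container_name: str) -> dict:
    """Projected volume exposing requests.memory of the named container."""
    return {
        "name": "podinfo",
        "projected": {
            "sources": [{
                "downwardAPI": {
                    "items": [{
                        "path": "mem_request",
                        "resourceFieldRef": {
                            "containerName": container_name,
                            "resource": "requests.memory",
                        },
                    }]
                }
            }]
        },
    }
```

The `client-container` in the log then simply reads the mounted file and exits, which is why the pod goes straight from Pending to Succeeded.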
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:57:20.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-6zwbw/configmap-test-78080ed0-2da2-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 20:57:21.010: INFO: Waiting up to 5m0s for pod "pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005" in namespace "e2e-tests-configmap-6zwbw" to be "success or failure"
Jan  2 20:57:21.023: INFO: Pod "pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.246522ms
Jan  2 20:57:23.116: INFO: Pod "pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106186042s
Jan  2 20:57:25.128: INFO: Pod "pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117780383s
Jan  2 20:57:27.500: INFO: Pod "pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.490341387s
Jan  2 20:57:29.516: INFO: Pod "pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.505706569s
Jan  2 20:57:31.527: INFO: Pod "pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.517042893s
Jan  2 20:57:33.545: INFO: Pod "pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.535386305s
STEP: Saw pod success
Jan  2 20:57:33.545: INFO: Pod "pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:57:33.549: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005 container env-test: 
STEP: delete the pod
Jan  2 20:57:34.356: INFO: Waiting for pod pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005 to disappear
Jan  2 20:57:34.701: INFO: Pod pod-configmaps-7808b1f1-2da2-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:57:34.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6zwbw" for this suite.
Jan  2 20:57:40.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:57:40.919: INFO: namespace: e2e-tests-configmap-6zwbw, resource: bindings, ignored listing per whitelist
Jan  2 20:57:41.155: INFO: namespace e2e-tests-configmap-6zwbw deletion completed in 6.406639365s

• [SLOW TEST:20.363 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
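The ConfigMap spec above wires a ConfigMap key into a container environment variable via `configMapKeyRef`. A minimal sketch of that env entry, with illustrative names in place of the generated ones from the log:

```python
# Sketch of the env wiring behind "consumable via environment variable":
# one env var sourced from one ConfigMap key. Names are illustrative.

def env_from_configmap(var: str, cm_name: str, key: str) -> dict:
    """Build an env entry whose value comes from a ConfigMap key."""
    return {
        "name": var,
        "valueFrom": {
            "configMapKeyRef": {"name": cm_name, "key": key},
        },
    }

env = env_from_configmap("CONFIG_DATA_1", "configmap-test", "data-1")
```

The `env-test` container in the log just echoes the resulting variable and exits, and the test asserts on the container's stdout.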
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:57:41.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 20:57:41.499: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"842c033a-2da2-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00256fb42), BlockOwnerDeletion:(*bool)(0xc00256fb43)}}
Jan  2 20:57:41.649: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"84284dce-2da2-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0023cb292), BlockOwnerDeletion:(*bool)(0xc0023cb293)}}
Jan  2 20:57:41.698: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"84298bd0-2da2-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0023cb3a2), BlockOwnerDeletion:(*bool)(0xc0023cb3a3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:57:46.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-86nph" for this suite.
Jan  2 20:57:53.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:57:53.115: INFO: namespace: e2e-tests-gc-86nph, resource: bindings, ignored listing per whitelist
Jan  2 20:57:53.276: INFO: namespace e2e-tests-gc-86nph deletion completed in 6.485259086s

• [SLOW TEST:12.121 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
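The garbage collector spec above builds a deliberate ownership cycle, visible in the three `OwnerReferences` log lines: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. A sketch of those owner references, with placeholder UIDs standing in for the real ones in the log:

```python
# Sketch of the circular ownership the GC test constructs; the garbage
# collector must still delete all three pods rather than deadlock.
# UIDs are placeholders for the real ones shown in the log.

def owner_ref(name: str, uid: str) -> dict:
    """OwnerReference marking `name` as the controlling, blocking owner."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "name": name,
        "uid": uid,
        "controller": True,
        "blockOwnerDeletion": True,
    }

# pod1 <- pod3, pod2 <- pod1, pod3 <- pod2: a closed dependency circle.
owners = {
    "pod1": owner_ref("pod3", "uid-3"),
    "pod2": owner_ref("pod1", "uid-1"),
    "pod3": owner_ref("pod2", "uid-2"),
}
```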
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:57:53.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  2 20:58:03.007: INFO: 10 pods remaining
Jan  2 20:58:03.007: INFO: 8 pods has nil DeletionTimestamp
Jan  2 20:58:03.007: INFO: 
Jan  2 20:58:04.231: INFO: 8 pods remaining
Jan  2 20:58:04.231: INFO: 0 pods has nil DeletionTimestamp
Jan  2 20:58:04.231: INFO: 
Jan  2 20:58:05.031: INFO: 0 pods remaining
Jan  2 20:58:05.031: INFO: 0 pods has nil DeletionTimestamp
Jan  2 20:58:05.031: INFO: 
STEP: Gathering metrics
W0102 20:58:05.914172       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 20:58:05.914: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:58:05.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-pn92m" for this suite.
Jan  2 20:58:22.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:58:22.293: INFO: namespace: e2e-tests-gc-pn92m, resource: bindings, ignored listing per whitelist
Jan  2 20:58:22.385: INFO: namespace e2e-tests-gc-pn92m deletion completed in 16.466386974s

• [SLOW TEST:29.109 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
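The "keep the rc around" behavior above comes from foreground cascading deletion: the replication controller carries the `foregroundDeletion` finalizer and is only removed once its dependent pods are gone, which is what the "10 / 8 / 0 pods remaining" countdown in the log shows. A minimal sketch, assuming the standard `propagationPolicy` field:

```python
# Sketch of the delete options behind "keep the rc around until all its pods
# are deleted": foreground propagation blocks the owner's removal on its
# dependents.

delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Foreground",
}

def rc_removable(remaining_pods: int) -> bool:
    """Under Foreground propagation, the owner goes only after its pods."""
    return remaining_pods == 0
```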
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:58:22.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-d9nrw
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 20:58:22.798: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 20:59:05.151: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-d9nrw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 20:59:05.151: INFO: >>> kubeConfig: /root/.kube/config
I0102 20:59:05.246299       8 log.go:172] (0xc0011c2420) (0xc0026be140) Create stream
I0102 20:59:05.246448       8 log.go:172] (0xc0011c2420) (0xc0026be140) Stream added, broadcasting: 1
I0102 20:59:05.252282       8 log.go:172] (0xc0011c2420) Reply frame received for 1
I0102 20:59:05.252329       8 log.go:172] (0xc0011c2420) (0xc0026be1e0) Create stream
I0102 20:59:05.252339       8 log.go:172] (0xc0011c2420) (0xc0026be1e0) Stream added, broadcasting: 3
I0102 20:59:05.253899       8 log.go:172] (0xc0011c2420) Reply frame received for 3
I0102 20:59:05.253937       8 log.go:172] (0xc0011c2420) (0xc002610000) Create stream
I0102 20:59:05.253953       8 log.go:172] (0xc0011c2420) (0xc002610000) Stream added, broadcasting: 5
I0102 20:59:05.255775       8 log.go:172] (0xc0011c2420) Reply frame received for 5
I0102 20:59:06.462539       8 log.go:172] (0xc0011c2420) Data frame received for 3
I0102 20:59:06.462678       8 log.go:172] (0xc0026be1e0) (3) Data frame handling
I0102 20:59:06.462707       8 log.go:172] (0xc0026be1e0) (3) Data frame sent
I0102 20:59:06.720864       8 log.go:172] (0xc0011c2420) (0xc002610000) Stream removed, broadcasting: 5
I0102 20:59:06.721160       8 log.go:172] (0xc0011c2420) Data frame received for 1
I0102 20:59:06.721218       8 log.go:172] (0xc0011c2420) (0xc0026be1e0) Stream removed, broadcasting: 3
I0102 20:59:06.721337       8 log.go:172] (0xc0026be140) (1) Data frame handling
I0102 20:59:06.721395       8 log.go:172] (0xc0026be140) (1) Data frame sent
I0102 20:59:06.721444       8 log.go:172] (0xc0011c2420) (0xc0026be140) Stream removed, broadcasting: 1
I0102 20:59:06.721488       8 log.go:172] (0xc0011c2420) Go away received
I0102 20:59:06.722386       8 log.go:172] (0xc0011c2420) (0xc0026be140) Stream removed, broadcasting: 1
I0102 20:59:06.722422       8 log.go:172] (0xc0011c2420) (0xc0026be1e0) Stream removed, broadcasting: 3
I0102 20:59:06.722447       8 log.go:172] (0xc0011c2420) (0xc002610000) Stream removed, broadcasting: 5
Jan  2 20:59:06.722: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:59:06.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-d9nrw" for this suite.
Jan  2 20:59:30.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:59:30.863: INFO: namespace: e2e-tests-pod-network-test-d9nrw, resource: bindings, ignored listing per whitelist
Jan  2 20:59:30.939: INFO: namespace e2e-tests-pod-network-test-d9nrw deletion completed in 24.187323595s

• [SLOW TEST:68.552 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
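The networking check above execs `echo 'hostName' | nc -w 1 -u 10.32.0.4 8081` in a host test pod and expects the netserver pod's name back over UDP. A loopback sketch of that exchange, with a canned in-process responder standing in for the netserver:

```python
# Loopback sketch of the UDP probe the networking test performs: send
# "hostName" to the netserver's UDP port and expect the pod's hostname back.
# Here a local thread with a canned "netserver-0" reply stands in for it.
import socket
import threading

def serve_once(sock: socket.socket, reply: bytes) -> None:
    """Answer a single 'hostName' datagram, like the e2e netserver would."""
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(reply, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # OS-assigned port instead of 8081
port = server.getsockname()[1]
t = threading.Thread(target=serve_once, args=(server, b"netserver-0"))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)                 # mirrors `nc -w 1`
client.sendto(b"hostName\n", ("127.0.0.1", port))
reply, _ = client.recvfrom(1024)
t.join()
server.close()
client.close()
```

"Found all expected endpoints: [netserver-0]" in the log is the suite confirming that every netserver pod answered this probe.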
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:59:30.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 20:59:31.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-ktfjz" to be "success or failure"
Jan  2 20:59:31.170: INFO: Pod "downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.346258ms
Jan  2 20:59:33.435: INFO: Pod "downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.274265154s
Jan  2 20:59:35.461: INFO: Pod "downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300038703s
Jan  2 20:59:37.680: INFO: Pod "downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.519003808s
Jan  2 20:59:39.698: INFO: Pod "downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536864681s
Jan  2 20:59:41.734: INFO: Pod "downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.572785622s
Jan  2 20:59:43.784: INFO: Pod "downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.622880754s
STEP: Saw pod success
Jan  2 20:59:43.784: INFO: Pod "downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 20:59:43.789: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 20:59:44.859: INFO: Waiting for pod downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005 to disappear
Jan  2 20:59:45.063: INFO: Pod downwardapi-volume-c5979557-2da2-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 20:59:45.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ktfjz" for this suite.
Jan  2 20:59:53.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 20:59:53.402: INFO: namespace: e2e-tests-downward-api-ktfjz, resource: bindings, ignored listing per whitelist
Jan  2 20:59:53.420: INFO: namespace e2e-tests-downward-api-ktfjz deletion completed in 8.335100624s

• [SLOW TEST:22.481 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 20:59:53.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-d2f8fe55-2da2-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 20:59:53.608: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005" in namespace "e2e-tests-configmap-c56t5" to be "success or failure"
Jan  2 20:59:53.625: INFO: Pod "pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.691366ms
Jan  2 20:59:56.348: INFO: Pod "pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739470886s
Jan  2 20:59:58.376: INFO: Pod "pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.767100854s
Jan  2 21:00:00.716: INFO: Pod "pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.107549735s
Jan  2 21:00:02.744: INFO: Pod "pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.135609104s
Jan  2 21:00:04.770: INFO: Pod "pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.162035309s
STEP: Saw pod success
Jan  2 21:00:04.771: INFO: Pod "pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:00:04.788: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 21:00:04.953: INFO: Waiting for pod pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005 to disappear
Jan  2 21:00:04.980: INFO: Pod pod-configmaps-d2faeb3a-2da2-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:00:04.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-c56t5" for this suite.
Jan  2 21:00:11.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:00:11.575: INFO: namespace: e2e-tests-configmap-c56t5, resource: bindings, ignored listing per whitelist
Jan  2 21:00:11.641: INFO: namespace e2e-tests-configmap-c56t5 deletion completed in 6.492052575s

• [SLOW TEST:18.220 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:00:11.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 21:00:22.657: INFO: Successfully updated pod "labelsupdatedde62777-2da2-11ea-814c-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:00:24.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mdj2f" for this suite.
Jan  2 21:00:50.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:00:51.105: INFO: namespace: e2e-tests-projected-mdj2f, resource: bindings, ignored listing per whitelist
Jan  2 21:00:51.123: INFO: namespace e2e-tests-projected-mdj2f deletion completed in 26.221686982s

• [SLOW TEST:39.481 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:00:51.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  2 21:01:13.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:13.808: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:15.809: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:15.841: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:17.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:17.832: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:19.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:19.833: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:21.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:21.838: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:23.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:23.870: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:25.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:25.837: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:27.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:27.840: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:29.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:29.831: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:31.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:31.832: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:33.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:33.831: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:35.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:35.835: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:37.809: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:37.822: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:39.809: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:39.830: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:41.808: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:41.833: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 21:01:43.809: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 21:01:43.850: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:01:43.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mznq7" for this suite.
Jan  2 21:02:08.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:02:08.286: INFO: namespace: e2e-tests-container-lifecycle-hook-mznq7, resource: bindings, ignored listing per whitelist
Jan  2 21:02:08.326: INFO: namespace e2e-tests-container-lifecycle-hook-mznq7 deletion completed in 24.298254936s

• [SLOW TEST:77.203 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:02:08.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-2377536e-2da3-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 21:02:08.641: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-nph54" to be "success or failure"
Jan  2 21:02:08.655: INFO: Pod "pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.691698ms
Jan  2 21:02:10.671: INFO: Pod "pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030362053s
Jan  2 21:02:12.694: INFO: Pod "pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053401957s
Jan  2 21:02:14.805: INFO: Pod "pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164077808s
Jan  2 21:02:16.817: INFO: Pod "pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175899399s
Jan  2 21:02:18.855: INFO: Pod "pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.213536921s
STEP: Saw pod success
Jan  2 21:02:18.855: INFO: Pod "pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:02:18.873: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 21:02:19.079: INFO: Waiting for pod pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005 to disappear
Jan  2 21:02:19.111: INFO: Pod pod-projected-secrets-237840a4-2da3-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:02:19.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nph54" for this suite.
Jan  2 21:02:25.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:02:25.315: INFO: namespace: e2e-tests-projected-nph54, resource: bindings, ignored listing per whitelist
Jan  2 21:02:25.385: INFO: namespace e2e-tests-projected-nph54 deletion completed in 6.258947767s

• [SLOW TEST:17.058 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:02:25.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  2 21:02:53.936: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dx4fm PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:02:53.936: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:02:54.068319       8 log.go:172] (0xc0011c2580) (0xc002611cc0) Create stream
I0102 21:02:54.068426       8 log.go:172] (0xc0011c2580) (0xc002611cc0) Stream added, broadcasting: 1
I0102 21:02:54.081739       8 log.go:172] (0xc0011c2580) Reply frame received for 1
I0102 21:02:54.081823       8 log.go:172] (0xc0011c2580) (0xc000eecd20) Create stream
I0102 21:02:54.081845       8 log.go:172] (0xc0011c2580) (0xc000eecd20) Stream added, broadcasting: 3
I0102 21:02:54.085845       8 log.go:172] (0xc0011c2580) Reply frame received for 3
I0102 21:02:54.085895       8 log.go:172] (0xc0011c2580) (0xc002611d60) Create stream
I0102 21:02:54.085909       8 log.go:172] (0xc0011c2580) (0xc002611d60) Stream added, broadcasting: 5
I0102 21:02:54.087227       8 log.go:172] (0xc0011c2580) Reply frame received for 5
I0102 21:02:54.275248       8 log.go:172] (0xc0011c2580) Data frame received for 3
I0102 21:02:54.275411       8 log.go:172] (0xc000eecd20) (3) Data frame handling
I0102 21:02:54.275459       8 log.go:172] (0xc000eecd20) (3) Data frame sent
I0102 21:02:54.434018       8 log.go:172] (0xc0011c2580) Data frame received for 1
I0102 21:02:54.434125       8 log.go:172] (0xc0011c2580) (0xc002611d60) Stream removed, broadcasting: 5
I0102 21:02:54.434202       8 log.go:172] (0xc002611cc0) (1) Data frame handling
I0102 21:02:54.434250       8 log.go:172] (0xc002611cc0) (1) Data frame sent
I0102 21:02:54.434307       8 log.go:172] (0xc0011c2580) (0xc000eecd20) Stream removed, broadcasting: 3
I0102 21:02:54.434392       8 log.go:172] (0xc0011c2580) (0xc002611cc0) Stream removed, broadcasting: 1
I0102 21:02:54.434418       8 log.go:172] (0xc0011c2580) Go away received
I0102 21:02:54.434791       8 log.go:172] (0xc0011c2580) (0xc002611cc0) Stream removed, broadcasting: 1
I0102 21:02:54.434891       8 log.go:172] (0xc0011c2580) (0xc000eecd20) Stream removed, broadcasting: 3
I0102 21:02:54.434921       8 log.go:172] (0xc0011c2580) (0xc002611d60) Stream removed, broadcasting: 5
Jan  2 21:02:54.435: INFO: Exec stderr: ""
Jan  2 21:02:54.435: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dx4fm PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:02:54.435: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:02:54.585514       8 log.go:172] (0xc0011c28f0) (0xc002611ea0) Create stream
I0102 21:02:54.585818       8 log.go:172] (0xc0011c28f0) (0xc002611ea0) Stream added, broadcasting: 1
I0102 21:02:54.598900       8 log.go:172] (0xc0011c28f0) Reply frame received for 1
I0102 21:02:54.599202       8 log.go:172] (0xc0011c28f0) (0xc001fbef00) Create stream
I0102 21:02:54.599251       8 log.go:172] (0xc0011c28f0) (0xc001fbef00) Stream added, broadcasting: 3
I0102 21:02:54.601029       8 log.go:172] (0xc0011c28f0) Reply frame received for 3
I0102 21:02:54.601114       8 log.go:172] (0xc0011c28f0) (0xc0023e9680) Create stream
I0102 21:02:54.601138       8 log.go:172] (0xc0011c28f0) (0xc0023e9680) Stream added, broadcasting: 5
I0102 21:02:54.603214       8 log.go:172] (0xc0011c28f0) Reply frame received for 5
I0102 21:02:54.798530       8 log.go:172] (0xc0011c28f0) Data frame received for 3
I0102 21:02:54.798644       8 log.go:172] (0xc001fbef00) (3) Data frame handling
I0102 21:02:54.798686       8 log.go:172] (0xc001fbef00) (3) Data frame sent
I0102 21:02:54.947141       8 log.go:172] (0xc0011c28f0) Data frame received for 1
I0102 21:02:54.947280       8 log.go:172] (0xc002611ea0) (1) Data frame handling
I0102 21:02:54.947330       8 log.go:172] (0xc002611ea0) (1) Data frame sent
I0102 21:02:54.947354       8 log.go:172] (0xc0011c28f0) (0xc002611ea0) Stream removed, broadcasting: 1
I0102 21:02:54.948580       8 log.go:172] (0xc0011c28f0) (0xc001fbef00) Stream removed, broadcasting: 3
I0102 21:02:54.948790       8 log.go:172] (0xc0011c28f0) (0xc0023e9680) Stream removed, broadcasting: 5
I0102 21:02:54.948941       8 log.go:172] (0xc0011c28f0) Go away received
I0102 21:02:54.949325       8 log.go:172] (0xc0011c28f0) (0xc002611ea0) Stream removed, broadcasting: 1
I0102 21:02:54.949358       8 log.go:172] (0xc0011c28f0) (0xc001fbef00) Stream removed, broadcasting: 3
I0102 21:02:54.949378       8 log.go:172] (0xc0011c28f0) (0xc0023e9680) Stream removed, broadcasting: 5
Jan  2 21:02:54.949: INFO: Exec stderr: ""
Jan  2 21:02:54.949: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dx4fm PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:02:54.949: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:02:55.008488       8 log.go:172] (0xc001a6e2c0) (0xc001e88820) Create stream
I0102 21:02:55.008565       8 log.go:172] (0xc001a6e2c0) (0xc001e88820) Stream added, broadcasting: 1
I0102 21:02:55.012591       8 log.go:172] (0xc001a6e2c0) Reply frame received for 1
I0102 21:02:55.012731       8 log.go:172] (0xc001a6e2c0) (0xc0026be000) Create stream
I0102 21:02:55.012755       8 log.go:172] (0xc001a6e2c0) (0xc0026be000) Stream added, broadcasting: 3
I0102 21:02:55.014103       8 log.go:172] (0xc001a6e2c0) Reply frame received for 3
I0102 21:02:55.014140       8 log.go:172] (0xc001a6e2c0) (0xc0023e9720) Create stream
I0102 21:02:55.014164       8 log.go:172] (0xc001a6e2c0) (0xc0023e9720) Stream added, broadcasting: 5
I0102 21:02:55.015216       8 log.go:172] (0xc001a6e2c0) Reply frame received for 5
I0102 21:02:55.107150       8 log.go:172] (0xc001a6e2c0) Data frame received for 3
I0102 21:02:55.107287       8 log.go:172] (0xc0026be000) (3) Data frame handling
I0102 21:02:55.107325       8 log.go:172] (0xc0026be000) (3) Data frame sent
I0102 21:02:55.290428       8 log.go:172] (0xc001a6e2c0) Data frame received for 1
I0102 21:02:55.290581       8 log.go:172] (0xc001e88820) (1) Data frame handling
I0102 21:02:55.290613       8 log.go:172] (0xc001e88820) (1) Data frame sent
I0102 21:02:55.290629       8 log.go:172] (0xc001a6e2c0) (0xc001e88820) Stream removed, broadcasting: 1
I0102 21:02:55.291153       8 log.go:172] (0xc001a6e2c0) (0xc0026be000) Stream removed, broadcasting: 3
I0102 21:02:55.291353       8 log.go:172] (0xc001a6e2c0) (0xc0023e9720) Stream removed, broadcasting: 5
I0102 21:02:55.291462       8 log.go:172] (0xc001a6e2c0) (0xc001e88820) Stream removed, broadcasting: 1
I0102 21:02:55.291482       8 log.go:172] (0xc001a6e2c0) (0xc0026be000) Stream removed, broadcasting: 3
I0102 21:02:55.291497       8 log.go:172] (0xc001a6e2c0) (0xc0023e9720) Stream removed, broadcasting: 5
I0102 21:02:55.291630       8 log.go:172] (0xc001a6e2c0) Go away received
Jan  2 21:02:55.291: INFO: Exec stderr: ""
Jan  2 21:02:55.292: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dx4fm PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:02:55.292: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:02:55.409694       8 log.go:172] (0xc0005e3ce0) (0xc0023e9900) Create stream
I0102 21:02:55.409887       8 log.go:172] (0xc0005e3ce0) (0xc0023e9900) Stream added, broadcasting: 1
I0102 21:02:55.418693       8 log.go:172] (0xc0005e3ce0) Reply frame received for 1
I0102 21:02:55.418758       8 log.go:172] (0xc0005e3ce0) (0xc001e88a00) Create stream
I0102 21:02:55.418767       8 log.go:172] (0xc0005e3ce0) (0xc001e88a00) Stream added, broadcasting: 3
I0102 21:02:55.419593       8 log.go:172] (0xc0005e3ce0) Reply frame received for 3
I0102 21:02:55.419612       8 log.go:172] (0xc0005e3ce0) (0xc0026be0a0) Create stream
I0102 21:02:55.419624       8 log.go:172] (0xc0005e3ce0) (0xc0026be0a0) Stream added, broadcasting: 5
I0102 21:02:55.420502       8 log.go:172] (0xc0005e3ce0) Reply frame received for 5
I0102 21:02:55.497330       8 log.go:172] (0xc0005e3ce0) Data frame received for 3
I0102 21:02:55.497469       8 log.go:172] (0xc001e88a00) (3) Data frame handling
I0102 21:02:55.497510       8 log.go:172] (0xc001e88a00) (3) Data frame sent
I0102 21:02:55.589621       8 log.go:172] (0xc0005e3ce0) Data frame received for 1
I0102 21:02:55.591487       8 log.go:172] (0xc0005e3ce0) (0xc001e88a00) Stream removed, broadcasting: 3
I0102 21:02:55.593894       8 log.go:172] (0xc0023e9900) (1) Data frame handling
I0102 21:02:55.594041       8 log.go:172] (0xc0023e9900) (1) Data frame sent
I0102 21:02:55.596040       8 log.go:172] (0xc0005e3ce0) (0xc0026be0a0) Stream removed, broadcasting: 5
I0102 21:02:55.596202       8 log.go:172] (0xc0005e3ce0) (0xc0023e9900) Stream removed, broadcasting: 1
I0102 21:02:55.596333       8 log.go:172] (0xc0005e3ce0) Go away received
I0102 21:02:55.597455       8 log.go:172] (0xc0005e3ce0) (0xc0023e9900) Stream removed, broadcasting: 1
I0102 21:02:55.597545       8 log.go:172] (0xc0005e3ce0) (0xc001e88a00) Stream removed, broadcasting: 3
I0102 21:02:55.597616       8 log.go:172] (0xc0005e3ce0) (0xc0026be0a0) Stream removed, broadcasting: 5
Jan  2 21:02:55.598: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  2 21:02:55.598: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dx4fm PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:02:55.598: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:02:55.699293       8 log.go:172] (0xc0011c2210) (0xc0026100a0) Create stream
I0102 21:02:55.699458       8 log.go:172] (0xc0011c2210) (0xc0026100a0) Stream added, broadcasting: 1
I0102 21:02:55.705417       8 log.go:172] (0xc0011c2210) Reply frame received for 1
I0102 21:02:55.705475       8 log.go:172] (0xc0011c2210) (0xc002610140) Create stream
I0102 21:02:55.705489       8 log.go:172] (0xc0011c2210) (0xc002610140) Stream added, broadcasting: 3
I0102 21:02:55.706678       8 log.go:172] (0xc0011c2210) Reply frame received for 3
I0102 21:02:55.706754       8 log.go:172] (0xc0011c2210) (0xc0015f6000) Create stream
I0102 21:02:55.706791       8 log.go:172] (0xc0011c2210) (0xc0015f6000) Stream added, broadcasting: 5
I0102 21:02:55.708287       8 log.go:172] (0xc0011c2210) Reply frame received for 5
I0102 21:02:55.811675       8 log.go:172] (0xc0011c2210) Data frame received for 3
I0102 21:02:55.811803       8 log.go:172] (0xc002610140) (3) Data frame handling
I0102 21:02:55.811837       8 log.go:172] (0xc002610140) (3) Data frame sent
I0102 21:02:55.942026       8 log.go:172] (0xc0011c2210) Data frame received for 1
I0102 21:02:55.942134       8 log.go:172] (0xc0011c2210) (0xc002610140) Stream removed, broadcasting: 3
I0102 21:02:55.942185       8 log.go:172] (0xc0026100a0) (1) Data frame handling
I0102 21:02:55.942221       8 log.go:172] (0xc0026100a0) (1) Data frame sent
I0102 21:02:55.942387       8 log.go:172] (0xc0011c2210) (0xc0015f6000) Stream removed, broadcasting: 5
I0102 21:02:55.942718       8 log.go:172] (0xc0011c2210) (0xc0026100a0) Stream removed, broadcasting: 1
I0102 21:02:55.942800       8 log.go:172] (0xc0011c2210) Go away received
I0102 21:02:55.943034       8 log.go:172] (0xc0011c2210) (0xc0026100a0) Stream removed, broadcasting: 1
I0102 21:02:55.943045       8 log.go:172] (0xc0011c2210) (0xc002610140) Stream removed, broadcasting: 3
I0102 21:02:55.943055       8 log.go:172] (0xc0011c2210) (0xc0015f6000) Stream removed, broadcasting: 5
Jan  2 21:02:55.943: INFO: Exec stderr: ""
Jan  2 21:02:55.943: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dx4fm PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:02:55.943: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:02:56.008170       8 log.go:172] (0xc0005e3b80) (0xc001e281e0) Create stream
I0102 21:02:56.008304       8 log.go:172] (0xc0005e3b80) (0xc001e281e0) Stream added, broadcasting: 1
I0102 21:02:56.012693       8 log.go:172] (0xc0005e3b80) Reply frame received for 1
I0102 21:02:56.012738       8 log.go:172] (0xc0005e3b80) (0xc0015f60a0) Create stream
I0102 21:02:56.012749       8 log.go:172] (0xc0005e3b80) (0xc0015f60a0) Stream added, broadcasting: 3
I0102 21:02:56.013794       8 log.go:172] (0xc0005e3b80) Reply frame received for 3
I0102 21:02:56.013814       8 log.go:172] (0xc0005e3b80) (0xc001e28280) Create stream
I0102 21:02:56.013823       8 log.go:172] (0xc0005e3b80) (0xc001e28280) Stream added, broadcasting: 5
I0102 21:02:56.014657       8 log.go:172] (0xc0005e3b80) Reply frame received for 5
I0102 21:02:56.104866       8 log.go:172] (0xc0005e3b80) Data frame received for 3
I0102 21:02:56.104985       8 log.go:172] (0xc0015f60a0) (3) Data frame handling
I0102 21:02:56.105033       8 log.go:172] (0xc0015f60a0) (3) Data frame sent
I0102 21:02:56.218311       8 log.go:172] (0xc0005e3b80) (0xc001e28280) Stream removed, broadcasting: 5
I0102 21:02:56.218433       8 log.go:172] (0xc0005e3b80) Data frame received for 1
I0102 21:02:56.218468       8 log.go:172] (0xc0005e3b80) (0xc0015f60a0) Stream removed, broadcasting: 3
I0102 21:02:56.218503       8 log.go:172] (0xc001e281e0) (1) Data frame handling
I0102 21:02:56.218518       8 log.go:172] (0xc001e281e0) (1) Data frame sent
I0102 21:02:56.218531       8 log.go:172] (0xc0005e3b80) (0xc001e281e0) Stream removed, broadcasting: 1
I0102 21:02:56.218702       8 log.go:172] (0xc0005e3b80) (0xc001e281e0) Stream removed, broadcasting: 1
I0102 21:02:56.218716       8 log.go:172] (0xc0005e3b80) (0xc0015f60a0) Stream removed, broadcasting: 3
I0102 21:02:56.218724       8 log.go:172] (0xc0005e3b80) (0xc001e28280) Stream removed, broadcasting: 5
Jan  2 21:02:56.219: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  2 21:02:56.219: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dx4fm PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:02:56.219: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:02:56.286465       8 log.go:172] (0xc00202c420) (0xc001e283c0) Create stream
I0102 21:02:56.286657       8 log.go:172] (0xc00202c420) (0xc001e283c0) Stream added, broadcasting: 1
I0102 21:02:56.295976       8 log.go:172] (0xc00202c420) Reply frame received for 1
I0102 21:02:56.296060       8 log.go:172] (0xc00202c420) (0xc0015be0a0) Create stream
I0102 21:02:56.296082       8 log.go:172] (0xc00202c420) (0xc0015be0a0) Stream added, broadcasting: 3
I0102 21:02:56.306367       8 log.go:172] (0xc00202c420) Reply frame received for 3
I0102 21:02:56.306416       8 log.go:172] (0xc00202c420) (0xc002610640) Create stream
I0102 21:02:56.306442       8 log.go:172] (0xc00202c420) (0xc002610640) Stream added, broadcasting: 5
I0102 21:02:56.307627       8 log.go:172] (0xc00202c420) Reply frame received for 5
I0102 21:02:56.539655       8 log.go:172] (0xc00202c420) Data frame received for 3
I0102 21:02:56.539773       8 log.go:172] (0xc0015be0a0) (3) Data frame handling
I0102 21:02:56.539818       8 log.go:172] (0xc0015be0a0) (3) Data frame sent
I0102 21:02:56.642203       8 log.go:172] (0xc00202c420) (0xc0015be0a0) Stream removed, broadcasting: 3
I0102 21:02:56.642362       8 log.go:172] (0xc00202c420) Data frame received for 1
I0102 21:02:56.642406       8 log.go:172] (0xc001e283c0) (1) Data frame handling
I0102 21:02:56.642441       8 log.go:172] (0xc00202c420) (0xc002610640) Stream removed, broadcasting: 5
I0102 21:02:56.642491       8 log.go:172] (0xc001e283c0) (1) Data frame sent
I0102 21:02:56.642508       8 log.go:172] (0xc00202c420) (0xc001e283c0) Stream removed, broadcasting: 1
I0102 21:02:56.642527       8 log.go:172] (0xc00202c420) Go away received
I0102 21:02:56.642807       8 log.go:172] (0xc00202c420) (0xc001e283c0) Stream removed, broadcasting: 1
I0102 21:02:56.642837       8 log.go:172] (0xc00202c420) (0xc0015be0a0) Stream removed, broadcasting: 3
I0102 21:02:56.642851       8 log.go:172] (0xc00202c420) (0xc002610640) Stream removed, broadcasting: 5
Jan  2 21:02:56.642: INFO: Exec stderr: ""
Jan  2 21:02:56.643: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dx4fm PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:02:56.643: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:02:56.724748       8 log.go:172] (0xc0006f16b0) (0xc0015f6500) Create stream
I0102 21:02:56.724904       8 log.go:172] (0xc0006f16b0) (0xc0015f6500) Stream added, broadcasting: 1
I0102 21:02:56.744291       8 log.go:172] (0xc0006f16b0) Reply frame received for 1
I0102 21:02:56.744405       8 log.go:172] (0xc0006f16b0) (0xc001e28460) Create stream
I0102 21:02:56.744420       8 log.go:172] (0xc0006f16b0) (0xc001e28460) Stream added, broadcasting: 3
I0102 21:02:56.746154       8 log.go:172] (0xc0006f16b0) Reply frame received for 3
I0102 21:02:56.746211       8 log.go:172] (0xc0006f16b0) (0xc002610780) Create stream
I0102 21:02:56.746229       8 log.go:172] (0xc0006f16b0) (0xc002610780) Stream added, broadcasting: 5
I0102 21:02:56.747842       8 log.go:172] (0xc0006f16b0) Reply frame received for 5
I0102 21:02:56.916942       8 log.go:172] (0xc0006f16b0) Data frame received for 3
I0102 21:02:56.917071       8 log.go:172] (0xc001e28460) (3) Data frame handling
I0102 21:02:56.917102       8 log.go:172] (0xc001e28460) (3) Data frame sent
I0102 21:02:57.024092       8 log.go:172] (0xc0006f16b0) (0xc001e28460) Stream removed, broadcasting: 3
I0102 21:02:57.024314       8 log.go:172] (0xc0006f16b0) Data frame received for 1
I0102 21:02:57.024351       8 log.go:172] (0xc0015f6500) (1) Data frame handling
I0102 21:02:57.024378       8 log.go:172] (0xc0006f16b0) (0xc002610780) Stream removed, broadcasting: 5
I0102 21:02:57.024399       8 log.go:172] (0xc0015f6500) (1) Data frame sent
I0102 21:02:57.024430       8 log.go:172] (0xc0006f16b0) (0xc0015f6500) Stream removed, broadcasting: 1
I0102 21:02:57.024549       8 log.go:172] (0xc0006f16b0) Go away received
I0102 21:02:57.024796       8 log.go:172] (0xc0006f16b0) (0xc0015f6500) Stream removed, broadcasting: 1
I0102 21:02:57.024813       8 log.go:172] (0xc0006f16b0) (0xc001e28460) Stream removed, broadcasting: 3
I0102 21:02:57.024822       8 log.go:172] (0xc0006f16b0) (0xc002610780) Stream removed, broadcasting: 5
Jan  2 21:02:57.024: INFO: Exec stderr: ""
Jan  2 21:02:57.024: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dx4fm PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:02:57.025: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:02:57.092815       8 log.go:172] (0xc0006f1ce0) (0xc0015f6960) Create stream
I0102 21:02:57.093016       8 log.go:172] (0xc0006f1ce0) (0xc0015f6960) Stream added, broadcasting: 1
I0102 21:02:57.098318       8 log.go:172] (0xc0006f1ce0) Reply frame received for 1
I0102 21:02:57.098363       8 log.go:172] (0xc0006f1ce0) (0xc0014de000) Create stream
I0102 21:02:57.098372       8 log.go:172] (0xc0006f1ce0) (0xc0014de000) Stream added, broadcasting: 3
I0102 21:02:57.099114       8 log.go:172] (0xc0006f1ce0) Reply frame received for 3
I0102 21:02:57.099135       8 log.go:172] (0xc0006f1ce0) (0xc0015f6a00) Create stream
I0102 21:02:57.099143       8 log.go:172] (0xc0006f1ce0) (0xc0015f6a00) Stream added, broadcasting: 5
I0102 21:02:57.100047       8 log.go:172] (0xc0006f1ce0) Reply frame received for 5
I0102 21:02:57.188053       8 log.go:172] (0xc0006f1ce0) Data frame received for 3
I0102 21:02:57.188216       8 log.go:172] (0xc0014de000) (3) Data frame handling
I0102 21:02:57.188246       8 log.go:172] (0xc0014de000) (3) Data frame sent
I0102 21:02:57.306938       8 log.go:172] (0xc0006f1ce0) Data frame received for 1
I0102 21:02:57.307088       8 log.go:172] (0xc0015f6960) (1) Data frame handling
I0102 21:02:57.307119       8 log.go:172] (0xc0015f6960) (1) Data frame sent
I0102 21:02:57.307167       8 log.go:172] (0xc0006f1ce0) (0xc0014de000) Stream removed, broadcasting: 3
I0102 21:02:57.307311       8 log.go:172] (0xc0006f1ce0) (0xc0015f6960) Stream removed, broadcasting: 1
I0102 21:02:57.307652       8 log.go:172] (0xc0006f1ce0) (0xc0015f6a00) Stream removed, broadcasting: 5
I0102 21:02:57.307713       8 log.go:172] (0xc0006f1ce0) Go away received
I0102 21:02:57.307836       8 log.go:172] (0xc0006f1ce0) (0xc0015f6960) Stream removed, broadcasting: 1
I0102 21:02:57.307863       8 log.go:172] (0xc0006f1ce0) (0xc0014de000) Stream removed, broadcasting: 3
I0102 21:02:57.307879       8 log.go:172] (0xc0006f1ce0) (0xc0015f6a00) Stream removed, broadcasting: 5
Jan  2 21:02:57.307: INFO: Exec stderr: ""
Jan  2 21:02:57.308: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-dx4fm PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:02:57.308: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:02:57.386630       8 log.go:172] (0xc0021b62c0) (0xc0015be3c0) Create stream
I0102 21:02:57.386715       8 log.go:172] (0xc0021b62c0) (0xc0015be3c0) Stream added, broadcasting: 1
I0102 21:02:57.391055       8 log.go:172] (0xc0021b62c0) Reply frame received for 1
I0102 21:02:57.391122       8 log.go:172] (0xc0021b62c0) (0xc0014de0a0) Create stream
I0102 21:02:57.391142       8 log.go:172] (0xc0021b62c0) (0xc0014de0a0) Stream added, broadcasting: 3
I0102 21:02:57.392051       8 log.go:172] (0xc0021b62c0) Reply frame received for 3
I0102 21:02:57.392100       8 log.go:172] (0xc0021b62c0) (0xc0014de1e0) Create stream
I0102 21:02:57.392122       8 log.go:172] (0xc0021b62c0) (0xc0014de1e0) Stream added, broadcasting: 5
I0102 21:02:57.395239       8 log.go:172] (0xc0021b62c0) Reply frame received for 5
I0102 21:02:57.529160       8 log.go:172] (0xc0021b62c0) Data frame received for 3
I0102 21:02:57.529291       8 log.go:172] (0xc0014de0a0) (3) Data frame handling
I0102 21:02:57.529342       8 log.go:172] (0xc0014de0a0) (3) Data frame sent
I0102 21:02:57.667332       8 log.go:172] (0xc0021b62c0) Data frame received for 1
I0102 21:02:57.667407       8 log.go:172] (0xc0015be3c0) (1) Data frame handling
I0102 21:02:57.667456       8 log.go:172] (0xc0015be3c0) (1) Data frame sent
I0102 21:02:57.667492       8 log.go:172] (0xc0021b62c0) (0xc0015be3c0) Stream removed, broadcasting: 1
I0102 21:02:57.667980       8 log.go:172] (0xc0021b62c0) (0xc0014de0a0) Stream removed, broadcasting: 3
I0102 21:02:57.668411       8 log.go:172] (0xc0021b62c0) (0xc0014de1e0) Stream removed, broadcasting: 5
I0102 21:02:57.668478       8 log.go:172] (0xc0021b62c0) (0xc0015be3c0) Stream removed, broadcasting: 1
I0102 21:02:57.668503       8 log.go:172] (0xc0021b62c0) (0xc0014de0a0) Stream removed, broadcasting: 3
I0102 21:02:57.668522       8 log.go:172] (0xc0021b62c0) (0xc0014de1e0) Stream removed, broadcasting: 5
Jan  2 21:02:57.668: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:02:57.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-dx4fm" for this suite.
Jan  2 21:03:53.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:03:53.945: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-dx4fm, resource: bindings, ignored listing per whitelist
Jan  2 21:03:53.967: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-dx4fm deletion completed in 56.288363659s

• [SLOW TEST:88.582 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:03:53.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan  2 21:03:54.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-grb5h'
Jan  2 21:03:56.577: INFO: stderr: ""
Jan  2 21:03:56.577: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan  2 21:03:58.627: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:03:58.628: INFO: Found 0 / 1
Jan  2 21:03:59.684: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:03:59.684: INFO: Found 0 / 1
Jan  2 21:04:00.597: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:04:00.597: INFO: Found 0 / 1
Jan  2 21:04:01.596: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:04:01.596: INFO: Found 0 / 1
Jan  2 21:04:02.598: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:04:02.598: INFO: Found 0 / 1
Jan  2 21:04:03.716: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:04:03.716: INFO: Found 0 / 1
Jan  2 21:04:04.591: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:04:04.591: INFO: Found 0 / 1
Jan  2 21:04:05.634: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:04:05.634: INFO: Found 0 / 1
Jan  2 21:04:06.612: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:04:06.613: INFO: Found 0 / 1
Jan  2 21:04:07.603: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:04:07.603: INFO: Found 1 / 1
Jan  2 21:04:07.603: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  2 21:04:07.613: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 21:04:07.613: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Jan  2 21:04:07.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kwg6g redis-master --namespace=e2e-tests-kubectl-grb5h'
Jan  2 21:04:07.989: INFO: stderr: ""
Jan  2 21:04:07.989: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 21:04:06.085 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 21:04:06.085 # Server started, Redis version 3.2.12\n1:M 02 Jan 21:04:06.085 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 21:04:06.085 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  2 21:04:07.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kwg6g redis-master --namespace=e2e-tests-kubectl-grb5h --tail=1'
Jan  2 21:04:08.225: INFO: stderr: ""
Jan  2 21:04:08.225: INFO: stdout: "1:M 02 Jan 21:04:06.085 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  2 21:04:08.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kwg6g redis-master --namespace=e2e-tests-kubectl-grb5h --limit-bytes=1'
Jan  2 21:04:08.377: INFO: stderr: ""
Jan  2 21:04:08.377: INFO: stdout: " "
STEP: exposing timestamps
Jan  2 21:04:08.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kwg6g redis-master --namespace=e2e-tests-kubectl-grb5h --tail=1 --timestamps'
Jan  2 21:04:08.553: INFO: stderr: ""
Jan  2 21:04:08.553: INFO: stdout: "2020-01-02T21:04:06.085952077Z 1:M 02 Jan 21:04:06.085 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  2 21:04:11.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kwg6g redis-master --namespace=e2e-tests-kubectl-grb5h --since=1s'
Jan  2 21:04:11.265: INFO: stderr: ""
Jan  2 21:04:11.265: INFO: stdout: ""
Jan  2 21:04:11.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kwg6g redis-master --namespace=e2e-tests-kubectl-grb5h --since=24h'
Jan  2 21:04:11.445: INFO: stderr: ""
Jan  2 21:04:11.445: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 21:04:06.085 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 21:04:06.085 # Server started, Redis version 3.2.12\n1:M 02 Jan 21:04:06.085 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 21:04:06.085 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan  2 21:04:11.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-grb5h'
Jan  2 21:04:11.605: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 21:04:11.605: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  2 21:04:11.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-grb5h'
Jan  2 21:04:11.947: INFO: stderr: "No resources found.\n"
Jan  2 21:04:11.947: INFO: stdout: ""
Jan  2 21:04:11.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-grb5h -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 21:04:12.214: INFO: stderr: ""
Jan  2 21:04:12.214: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:04:12.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-grb5h" for this suite.
Jan  2 21:04:34.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:04:34.593: INFO: namespace: e2e-tests-kubectl-grb5h, resource: bindings, ignored listing per whitelist
Jan  2 21:04:34.654: INFO: namespace e2e-tests-kubectl-grb5h deletion completed in 22.419393685s

• [SLOW TEST:40.687 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:04:34.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-7aa8ca39-2da3-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 21:04:34.920: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-mrgrq" to be "success or failure"
Jan  2 21:04:34.949: INFO: Pod "pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.380518ms
Jan  2 21:04:36.972: INFO: Pod "pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052077869s
Jan  2 21:04:38.990: INFO: Pod "pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070072895s
Jan  2 21:04:41.012: INFO: Pod "pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091412429s
Jan  2 21:04:43.026: INFO: Pod "pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105527691s
Jan  2 21:04:45.045: INFO: Pod "pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125115828s
Jan  2 21:04:47.058: INFO: Pod "pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.137725006s
STEP: Saw pod success
Jan  2 21:04:47.058: INFO: Pod "pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:04:47.063: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 21:04:47.934: INFO: Waiting for pod pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005 to disappear
Jan  2 21:04:48.009: INFO: Pod pod-projected-configmaps-7aaa4213-2da3-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:04:48.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mrgrq" for this suite.
Jan  2 21:04:54.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:04:54.241: INFO: namespace: e2e-tests-projected-mrgrq, resource: bindings, ignored listing per whitelist
Jan  2 21:04:54.246: INFO: namespace e2e-tests-projected-mrgrq deletion completed in 6.225669165s

• [SLOW TEST:19.591 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:04:54.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 21:04:54.734: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-pgp8j" to be "success or failure"
Jan  2 21:04:54.851: INFO: Pod "downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 116.925575ms
Jan  2 21:04:57.036: INFO: Pod "downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301551444s
Jan  2 21:04:59.057: INFO: Pod "downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322472916s
Jan  2 21:05:01.576: INFO: Pod "downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.842162486s
Jan  2 21:05:04.093: INFO: Pod "downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.358782508s
Jan  2 21:05:06.110: INFO: Pod "downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.376022144s
STEP: Saw pod success
Jan  2 21:05:06.110: INFO: Pod "downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:05:06.121: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 21:05:06.988: INFO: Waiting for pod downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005 to disappear
Jan  2 21:05:07.023: INFO: Pod downwardapi-volume-86738bba-2da3-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:05:07.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pgp8j" for this suite.
Jan  2 21:05:13.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:05:13.358: INFO: namespace: e2e-tests-downward-api-pgp8j, resource: bindings, ignored listing per whitelist
Jan  2 21:05:13.372: INFO: namespace e2e-tests-downward-api-pgp8j deletion completed in 6.336001747s

• [SLOW TEST:19.126 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:05:13.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-91b26a18-2da3-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 21:05:13.579: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-ffvtf" to be "success or failure"
Jan  2 21:05:13.595: INFO: Pod "pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.945309ms
Jan  2 21:05:15.712: INFO: Pod "pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132925887s
Jan  2 21:05:17.745: INFO: Pod "pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165401839s
Jan  2 21:05:20.083: INFO: Pod "pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503602477s
Jan  2 21:05:22.108: INFO: Pod "pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528770086s
Jan  2 21:05:24.131: INFO: Pod "pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.551501977s
STEP: Saw pod success
Jan  2 21:05:24.131: INFO: Pod "pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:05:24.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 21:05:24.301: INFO: Waiting for pod pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005 to disappear
Jan  2 21:05:24.312: INFO: Pod pod-projected-configmaps-91b4654c-2da3-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:05:24.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ffvtf" for this suite.
Jan  2 21:05:30.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:05:30.433: INFO: namespace: e2e-tests-projected-ffvtf, resource: bindings, ignored listing per whitelist
Jan  2 21:05:30.592: INFO: namespace e2e-tests-projected-ffvtf deletion completed in 6.267489283s

• [SLOW TEST:17.218 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:05:30.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 21:05:30.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan  2 21:05:31.022: INFO: stderr: ""
Jan  2 21:05:31.022: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan  2 21:05:31.031: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:05:31.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8dmnv" for this suite.
Jan  2 21:05:37.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:05:37.195: INFO: namespace: e2e-tests-kubectl-8dmnv, resource: bindings, ignored listing per whitelist
Jan  2 21:05:37.324: INFO: namespace e2e-tests-kubectl-8dmnv deletion completed in 6.253322598s

S [SKIPPING] [6.732 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan  2 21:05:31.031: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:05:37.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-s9dz
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 21:05:37.504: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-s9dz" in namespace "e2e-tests-subpath-jq4hw" to be "success or failure"
Jan  2 21:05:37.513: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.388501ms
Jan  2 21:05:39.787: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.283695735s
Jan  2 21:05:41.817: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313002493s
Jan  2 21:05:44.323: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.819701719s
Jan  2 21:05:46.350: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.845946335s
Jan  2 21:05:48.374: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.870562484s
Jan  2 21:05:50.390: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.885850648s
Jan  2 21:05:52.408: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.904200368s
Jan  2 21:05:54.505: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.001627451s
Jan  2 21:05:56.533: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Running", Reason="", readiness=false. Elapsed: 19.029332726s
Jan  2 21:05:58.577: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Running", Reason="", readiness=false. Elapsed: 21.073039239s
Jan  2 21:06:00.591: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Running", Reason="", readiness=false. Elapsed: 23.087658334s
Jan  2 21:06:02.610: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Running", Reason="", readiness=false. Elapsed: 25.106491661s
Jan  2 21:06:04.629: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Running", Reason="", readiness=false. Elapsed: 27.125436374s
Jan  2 21:06:06.675: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Running", Reason="", readiness=false. Elapsed: 29.171795042s
Jan  2 21:06:08.692: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Running", Reason="", readiness=false. Elapsed: 31.187898792s
Jan  2 21:06:10.707: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Running", Reason="", readiness=false. Elapsed: 33.20292241s
Jan  2 21:06:12.773: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Running", Reason="", readiness=false. Elapsed: 35.268951309s
Jan  2 21:06:14.891: INFO: Pod "pod-subpath-test-configmap-s9dz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.387224243s
STEP: Saw pod success
Jan  2 21:06:14.891: INFO: Pod "pod-subpath-test-configmap-s9dz" satisfied condition "success or failure"
Jan  2 21:06:14.929: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-s9dz container test-container-subpath-configmap-s9dz: 
STEP: delete the pod
Jan  2 21:06:15.298: INFO: Waiting for pod pod-subpath-test-configmap-s9dz to disappear
Jan  2 21:06:15.311: INFO: Pod pod-subpath-test-configmap-s9dz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-s9dz
Jan  2 21:06:15.311: INFO: Deleting pod "pod-subpath-test-configmap-s9dz" in namespace "e2e-tests-subpath-jq4hw"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:06:15.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jq4hw" for this suite.
Jan  2 21:06:21.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:06:21.594: INFO: namespace: e2e-tests-subpath-jq4hw, resource: bindings, ignored listing per whitelist
Jan  2 21:06:21.753: INFO: namespace e2e-tests-subpath-jq4hw deletion completed in 6.418074085s

• [SLOW TEST:44.428 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
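The repeated `Phase="Pending" ... Elapsed:` lines in the block above come from a poll-until-terminal-phase loop. A minimal Python sketch of that pattern (a simulation with a stubbed phase function and a no-op sleep, not the actual e2e framework code):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or the timeout expires.

    Returns the terminal phase ("Succeeded" or "Failed"); raises TimeoutError
    otherwise. Mirrors the log's "Waiting up to 5m0s ... to be 'success or
    failure'" loop, including the per-poll Elapsed report.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Stubbed phase sequence matching the log: Pending polls, then Running, then Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), sleep=lambda s: None)
```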
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:06:21.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  2 21:06:22.053: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9wxnr,SelfLink:/api/v1/namespaces/e2e-tests-watch-9wxnr/configmaps/e2e-watch-test-label-changed,UID:ba8334d5-2da3-11ea-a994-fa163e34d433,ResourceVersion:16965893,Generation:0,CreationTimestamp:2020-01-02 21:06:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 21:06:22.053: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9wxnr,SelfLink:/api/v1/namespaces/e2e-tests-watch-9wxnr/configmaps/e2e-watch-test-label-changed,UID:ba8334d5-2da3-11ea-a994-fa163e34d433,ResourceVersion:16965894,Generation:0,CreationTimestamp:2020-01-02 21:06:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  2 21:06:22.054: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9wxnr,SelfLink:/api/v1/namespaces/e2e-tests-watch-9wxnr/configmaps/e2e-watch-test-label-changed,UID:ba8334d5-2da3-11ea-a994-fa163e34d433,ResourceVersion:16965895,Generation:0,CreationTimestamp:2020-01-02 21:06:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  2 21:06:32.254: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9wxnr,SelfLink:/api/v1/namespaces/e2e-tests-watch-9wxnr/configmaps/e2e-watch-test-label-changed,UID:ba8334d5-2da3-11ea-a994-fa163e34d433,ResourceVersion:16965909,Generation:0,CreationTimestamp:2020-01-02 21:06:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 21:06:32.254: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9wxnr,SelfLink:/api/v1/namespaces/e2e-tests-watch-9wxnr/configmaps/e2e-watch-test-label-changed,UID:ba8334d5-2da3-11ea-a994-fa163e34d433,ResourceVersion:16965910,Generation:0,CreationTimestamp:2020-01-02 21:06:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  2 21:06:32.254: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9wxnr,SelfLink:/api/v1/namespaces/e2e-tests-watch-9wxnr/configmaps/e2e-watch-test-label-changed,UID:ba8334d5-2da3-11ea-a994-fa163e34d433,ResourceVersion:16965911,Generation:0,CreationTimestamp:2020-01-02 21:06:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:06:32.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9wxnr" for this suite.
Jan  2 21:06:38.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:06:38.369: INFO: namespace: e2e-tests-watch-9wxnr, resource: bindings, ignored listing per whitelist
Jan  2 21:06:38.829: INFO: namespace e2e-tests-watch-9wxnr deletion completed in 6.549925966s

• [SLOW TEST:17.075 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
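The ADDED/MODIFIED/DELETED sequences in the Watchers test above reflect how a label-selector watch reports objects that move out of and back into the selector's scope: changing the label off the selector surfaces as DELETED, restoring it as a fresh ADDED. A toy simulation of that bookkeeping (hypothetical helper, not client-go):

```python
def selector_watch_events(selector, updates):
    """Translate raw object updates into watch events as seen through a label selector.

    updates is a list of (name, labels, deleted) tuples. An object leaving the
    selector's scope is reported as DELETED; re-entering it as ADDED, matching
    the configmap test above.
    """
    in_scope = set()
    events = []
    for name, labels, deleted in updates:
        matches = not deleted and all(labels.get(k) == v for k, v in selector.items())
        if matches and name not in in_scope:
            in_scope.add(name)
            events.append(("ADDED", name))
        elif matches:
            events.append(("MODIFIED", name))
        elif name in in_scope:
            in_scope.discard(name)
            events.append(("DELETED", name))
        # else: update on an out-of-scope object -> no notification, as the
        # "Expecting not to observe a notification" step asserts.
    return events

sel = {"watch-this-configmap": "label-changed-and-restored"}
updates = [
    ("cm", {"watch-this-configmap": "label-changed-and-restored"}, False),  # create
    ("cm", {"watch-this-configmap": "label-changed-and-restored"}, False),  # first mutation
    ("cm", {"watch-this-configmap": "some-other-value"}, False),            # label changed away
    ("cm", {"watch-this-configmap": "some-other-value"}, False),            # second mutation, unseen
    ("cm", {"watch-this-configmap": "label-changed-and-restored"}, False),  # label restored
    ("cm", {"watch-this-configmap": "label-changed-and-restored"}, False),  # third mutation
    ("cm", {"watch-this-configmap": "label-changed-and-restored"}, True),   # configmap deleted
]
events = selector_watch_events(sel, updates)
```

Run against the update sequence from the test, this yields the same six notifications the log records: ADDED, MODIFIED, DELETED, then ADDED, MODIFIED, DELETED.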
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:06:38.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c4baff15-2da3-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 21:06:39.236: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005" in namespace "e2e-tests-configmap-5lzgz" to be "success or failure"
Jan  2 21:06:39.345: INFO: Pod "pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 109.516136ms
Jan  2 21:06:41.373: INFO: Pod "pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137125486s
Jan  2 21:06:43.384: INFO: Pod "pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148441097s
Jan  2 21:06:45.688: INFO: Pod "pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.45229139s
Jan  2 21:06:47.712: INFO: Pod "pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.475854491s
Jan  2 21:06:49.747: INFO: Pod "pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.510916106s
STEP: Saw pod success
Jan  2 21:06:49.747: INFO: Pod "pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:06:49.763: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 21:06:51.108: INFO: Waiting for pod pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005 to disappear
Jan  2 21:06:51.118: INFO: Pod pod-configmaps-c4bdd7d7-2da3-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:06:51.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5lzgz" for this suite.
Jan  2 21:06:57.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:06:57.463: INFO: namespace: e2e-tests-configmap-5lzgz, resource: bindings, ignored listing per whitelist
Jan  2 21:06:57.525: INFO: namespace e2e-tests-configmap-5lzgz deletion completed in 6.392761897s

• [SLOW TEST:18.695 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:06:57.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  2 21:06:57.635: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  2 21:06:57.704: INFO: Waiting for terminating namespaces to be deleted...
Jan  2 21:06:57.711: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test

Jan  2 21:06:57.728: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 21:06:57.728: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  2 21:06:57.728: INFO: 	Container weave ready: true, restart count 0
Jan  2 21:06:57.728: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 21:06:57.728: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Jan  2 21:06:57.728: INFO: 	Container coredns ready: true, restart count 0
Jan  2 21:06:57.728: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 21:06:57.728: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 21:06:57.728: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 21:06:57.728: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Jan  2 21:06:57.728: INFO: 	Container coredns ready: true, restart count 0
Jan  2 21:06:57.728: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Jan  2 21:06:57.728: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan  2 21:06:57.900: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  2 21:06:57.901: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  2 21:06:57.901: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan  2 21:06:57.901: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan  2 21:06:57.901: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan  2 21:06:57.901: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan  2 21:06:57.901: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  2 21:06:57.901: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cfe4ae5e-2da3-11ea-814c-0242ac110005.15e62ded9a8770ba], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-7gngn/filler-pod-cfe4ae5e-2da3-11ea-814c-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cfe4ae5e-2da3-11ea-814c-0242ac110005.15e62deeefef7716], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cfe4ae5e-2da3-11ea-814c-0242ac110005.15e62def802cf963], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cfe4ae5e-2da3-11ea-814c-0242ac110005.15e62defac80dc9b], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e62df06d43990c], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:07:11.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-7gngn" for this suite.
Jan  2 21:07:19.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:07:19.445: INFO: namespace: e2e-tests-sched-pred-7gngn, resource: bindings, ignored listing per whitelist
Jan  2 21:07:19.564: INFO: namespace e2e-tests-sched-pred-7gngn deletion completed in 8.252769699s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:22.039 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
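The `requesting resource cpu=...m` lines and the final `0/1 nodes are available: 1 Insufficient cpu.` event in the SchedulerPredicates test above amount to a millicore accounting check: sum what is already requested, and reject any pod whose request exceeds the remainder. A sketch of that fit test; the existing requests are taken from the log, but the node's allocatable CPU is an assumed example value (the log does not print it):

```python
def fits_cpu(allocatable_m, existing_requests_m, new_request_m):
    """Return True if a new pod's CPU request fits within the node's remaining capacity."""
    used = sum(existing_requests_m)
    return used + new_request_m <= allocatable_m

# Millicore requests from the log: coredns x2, etcd, apiserver, controller-manager,
# kube-proxy, scheduler, weave-net.
existing = [100, 100, 0, 250, 200, 0, 100, 20]
allocatable = 1000  # hypothetical 1-CPU node

# The test creates a filler pod sized to consume most of the remainder ...
filler = allocatable - sum(existing) - 100  # leave only 100m of headroom
assert fits_cpu(allocatable, existing, filler)

# ... then one more pod that cannot fit, which the scheduler rejects.
too_big = 200
if not fits_cpu(allocatable, existing + [filler], too_big):
    print("0/1 nodes are available: 1 Insufficient cpu.")
```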
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:07:19.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 21:07:19.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-nhpvc'
Jan  2 21:07:21.080: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 21:07:21.080: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  2 21:07:23.303: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-f6scn]
Jan  2 21:07:23.303: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-f6scn" in namespace "e2e-tests-kubectl-nhpvc" to be "running and ready"
Jan  2 21:07:23.314: INFO: Pod "e2e-test-nginx-rc-f6scn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.156606ms
Jan  2 21:07:25.323: INFO: Pod "e2e-test-nginx-rc-f6scn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019493404s
Jan  2 21:07:27.448: INFO: Pod "e2e-test-nginx-rc-f6scn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144076767s
Jan  2 21:07:29.457: INFO: Pod "e2e-test-nginx-rc-f6scn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154033314s
Jan  2 21:07:31.474: INFO: Pod "e2e-test-nginx-rc-f6scn": Phase="Running", Reason="", readiness=true. Elapsed: 8.170818947s
Jan  2 21:07:31.474: INFO: Pod "e2e-test-nginx-rc-f6scn" satisfied condition "running and ready"
Jan  2 21:07:31.475: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-f6scn]
Jan  2 21:07:31.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-nhpvc'
Jan  2 21:07:31.762: INFO: stderr: ""
Jan  2 21:07:31.762: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan  2 21:07:31.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-nhpvc'
Jan  2 21:07:31.948: INFO: stderr: ""
Jan  2 21:07:31.948: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:07:31.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nhpvc" for this suite.
Jan  2 21:07:56.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:07:56.151: INFO: namespace: e2e-tests-kubectl-nhpvc, resource: bindings, ignored listing per whitelist
Jan  2 21:07:56.270: INFO: namespace e2e-tests-kubectl-nhpvc deletion completed in 24.307360282s

• [SLOW TEST:36.705 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:07:56.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 21:07:56.660: INFO: Waiting up to 5m0s for pod "downward-api-f2e82202-2da3-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-tx7cp" to be "success or failure"
Jan  2 21:07:56.668: INFO: Pod "downward-api-f2e82202-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.814513ms
Jan  2 21:07:58.703: INFO: Pod "downward-api-f2e82202-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042435117s
Jan  2 21:08:00.714: INFO: Pod "downward-api-f2e82202-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053188578s
Jan  2 21:08:02.942: INFO: Pod "downward-api-f2e82202-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281896948s
Jan  2 21:08:05.426: INFO: Pod "downward-api-f2e82202-2da3-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.765928533s
Jan  2 21:08:07.437: INFO: Pod "downward-api-f2e82202-2da3-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.776335942s
STEP: Saw pod success
Jan  2 21:08:07.437: INFO: Pod "downward-api-f2e82202-2da3-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:08:07.445: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f2e82202-2da3-11ea-814c-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 21:08:07.770: INFO: Waiting for pod downward-api-f2e82202-2da3-11ea-814c-0242ac110005 to disappear
Jan  2 21:08:07.787: INFO: Pod downward-api-f2e82202-2da3-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:08:07.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tx7cp" for this suite.
Jan  2 21:08:13.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:08:14.117: INFO: namespace: e2e-tests-downward-api-tx7cp, resource: bindings, ignored listing per whitelist
Jan  2 21:08:14.122: INFO: namespace e2e-tests-downward-api-tx7cp deletion completed in 6.24544986s

• [SLOW TEST:17.852 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
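The downward-api test above injects the node's host IP into the container environment via a `fieldRef` on `status.hostIP`. A minimal sketch of that substitution (a simulated pod status with an example IP, not the kubelet's actual resolver):

```python
def resolve_downward_env(env_specs, pod_status):
    """Resolve env vars whose values come from pod-status fieldRefs.

    env_specs is a list of (env_var_name, field_path) pairs. Only the
    fieldPath exercised by this test, "status.hostIP", is supported here.
    """
    resolved = {}
    for name, field_path in env_specs:
        if field_path == "status.hostIP":
            resolved[name] = pod_status["hostIP"]
        else:
            raise ValueError(f"unsupported fieldPath: {field_path}")
    return resolved

# Example status; the log does not print the actual host IP.
status = {"hostIP": "10.96.1.2"}
env = resolve_downward_env([("HOST_IP", "status.hostIP")], status)
```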
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:08:14.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dhbxk
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 21:08:14.378: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 21:08:50.937: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-dhbxk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:08:50.937: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:08:51.042404       8 log.go:172] (0xc0028e82c0) (0xc001c10820) Create stream
I0102 21:08:51.042466       8 log.go:172] (0xc0028e82c0) (0xc001c10820) Stream added, broadcasting: 1
I0102 21:08:51.048383       8 log.go:172] (0xc0028e82c0) Reply frame received for 1
I0102 21:08:51.048419       8 log.go:172] (0xc0028e82c0) (0xc0026f6000) Create stream
I0102 21:08:51.048428       8 log.go:172] (0xc0028e82c0) (0xc0026f6000) Stream added, broadcasting: 3
I0102 21:08:51.050163       8 log.go:172] (0xc0028e82c0) Reply frame received for 3
I0102 21:08:51.050191       8 log.go:172] (0xc0028e82c0) (0xc001819900) Create stream
I0102 21:08:51.050201       8 log.go:172] (0xc0028e82c0) (0xc001819900) Stream added, broadcasting: 5
I0102 21:08:51.051089       8 log.go:172] (0xc0028e82c0) Reply frame received for 5
I0102 21:08:51.264099       8 log.go:172] (0xc0028e82c0) Data frame received for 3
I0102 21:08:51.264199       8 log.go:172] (0xc0026f6000) (3) Data frame handling
I0102 21:08:51.264229       8 log.go:172] (0xc0026f6000) (3) Data frame sent
I0102 21:08:51.520821       8 log.go:172] (0xc0028e82c0) (0xc0026f6000) Stream removed, broadcasting: 3
I0102 21:08:51.521211       8 log.go:172] (0xc0028e82c0) Data frame received for 1
I0102 21:08:51.521290       8 log.go:172] (0xc001c10820) (1) Data frame handling
I0102 21:08:51.521358       8 log.go:172] (0xc0028e82c0) (0xc001819900) Stream removed, broadcasting: 5
I0102 21:08:51.521539       8 log.go:172] (0xc001c10820) (1) Data frame sent
I0102 21:08:51.521670       8 log.go:172] (0xc0028e82c0) (0xc001c10820) Stream removed, broadcasting: 1
I0102 21:08:51.521766       8 log.go:172] (0xc0028e82c0) Go away received
I0102 21:08:51.522211       8 log.go:172] (0xc0028e82c0) (0xc001c10820) Stream removed, broadcasting: 1
I0102 21:08:51.522251       8 log.go:172] (0xc0028e82c0) (0xc0026f6000) Stream removed, broadcasting: 3
I0102 21:08:51.522279       8 log.go:172] (0xc0028e82c0) (0xc001819900) Stream removed, broadcasting: 5
Jan  2 21:08:51.522: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:08:51.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-dhbxk" for this suite.
Jan  2 21:09:17.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:09:17.795: INFO: namespace: e2e-tests-pod-network-test-dhbxk, resource: bindings, ignored listing per whitelist
Jan  2 21:09:17.891: INFO: namespace e2e-tests-pod-network-test-dhbxk deletion completed in 26.35047734s

• [SLOW TEST:63.768 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:09:17.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  2 21:09:18.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:18.775: INFO: stderr: ""
Jan  2 21:09:18.776: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 21:09:18.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:19.080: INFO: stderr: ""
Jan  2 21:09:19.080: INFO: stdout: "update-demo-nautilus-pcpc6 "
STEP: Replicas for name=update-demo: expected=2 actual=1
Jan  2 21:09:24.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:24.326: INFO: stderr: ""
Jan  2 21:09:24.326: INFO: stdout: "update-demo-nautilus-pcpc6 update-demo-nautilus-tq5kb "
Jan  2 21:09:24.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcpc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:24.562: INFO: stderr: ""
Jan  2 21:09:24.563: INFO: stdout: ""
Jan  2 21:09:24.563: INFO: update-demo-nautilus-pcpc6 is created but not running
Jan  2 21:09:29.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:30.312: INFO: stderr: ""
Jan  2 21:09:30.312: INFO: stdout: "update-demo-nautilus-pcpc6 update-demo-nautilus-tq5kb "
Jan  2 21:09:30.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcpc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:30.484: INFO: stderr: ""
Jan  2 21:09:30.484: INFO: stdout: ""
Jan  2 21:09:30.484: INFO: update-demo-nautilus-pcpc6 is created but not running
Jan  2 21:09:35.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:35.669: INFO: stderr: ""
Jan  2 21:09:35.669: INFO: stdout: "update-demo-nautilus-pcpc6 update-demo-nautilus-tq5kb "
Jan  2 21:09:35.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcpc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:35.908: INFO: stderr: ""
Jan  2 21:09:35.908: INFO: stdout: "true"
Jan  2 21:09:35.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcpc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:36.077: INFO: stderr: ""
Jan  2 21:09:36.077: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 21:09:36.077: INFO: validating pod update-demo-nautilus-pcpc6
Jan  2 21:09:36.132: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 21:09:36.132: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 21:09:36.132: INFO: update-demo-nautilus-pcpc6 is verified up and running
Jan  2 21:09:36.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tq5kb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:36.271: INFO: stderr: ""
Jan  2 21:09:36.271: INFO: stdout: "true"
Jan  2 21:09:36.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tq5kb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:36.400: INFO: stderr: ""
Jan  2 21:09:36.400: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 21:09:36.400: INFO: validating pod update-demo-nautilus-tq5kb
Jan  2 21:09:36.425: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 21:09:36.426: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 21:09:36.426: INFO: update-demo-nautilus-tq5kb is verified up and running
STEP: scaling down the replication controller
Jan  2 21:09:36.429: INFO: scanned /root for discovery docs: 
Jan  2 21:09:36.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:37.705: INFO: stderr: ""
Jan  2 21:09:37.705: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 21:09:37.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:37.970: INFO: stderr: ""
Jan  2 21:09:37.970: INFO: stdout: "update-demo-nautilus-pcpc6 update-demo-nautilus-tq5kb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  2 21:09:42.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:43.093: INFO: stderr: ""
Jan  2 21:09:43.094: INFO: stdout: "update-demo-nautilus-pcpc6 "
Jan  2 21:09:43.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcpc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:43.225: INFO: stderr: ""
Jan  2 21:09:43.225: INFO: stdout: "true"
Jan  2 21:09:43.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcpc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:43.351: INFO: stderr: ""
Jan  2 21:09:43.351: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 21:09:43.351: INFO: validating pod update-demo-nautilus-pcpc6
Jan  2 21:09:43.363: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 21:09:43.363: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 21:09:43.363: INFO: update-demo-nautilus-pcpc6 is verified up and running
STEP: scaling up the replication controller
Jan  2 21:09:43.366: INFO: scanned /root for discovery docs: 
Jan  2 21:09:43.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:45.021: INFO: stderr: ""
Jan  2 21:09:45.021: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 21:09:45.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:45.378: INFO: stderr: ""
Jan  2 21:09:45.378: INFO: stdout: "update-demo-nautilus-djzgq update-demo-nautilus-pcpc6 "
Jan  2 21:09:45.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djzgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:45.758: INFO: stderr: ""
Jan  2 21:09:45.758: INFO: stdout: ""
Jan  2 21:09:45.758: INFO: update-demo-nautilus-djzgq is created but not running
Jan  2 21:09:50.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:50.883: INFO: stderr: ""
Jan  2 21:09:50.883: INFO: stdout: "update-demo-nautilus-djzgq update-demo-nautilus-pcpc6 "
Jan  2 21:09:50.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djzgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:51.028: INFO: stderr: ""
Jan  2 21:09:51.028: INFO: stdout: ""
Jan  2 21:09:51.028: INFO: update-demo-nautilus-djzgq is created but not running
Jan  2 21:09:56.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:56.252: INFO: stderr: ""
Jan  2 21:09:56.252: INFO: stdout: "update-demo-nautilus-djzgq update-demo-nautilus-pcpc6 "
Jan  2 21:09:56.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djzgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:56.424: INFO: stderr: ""
Jan  2 21:09:56.424: INFO: stdout: "true"
Jan  2 21:09:56.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djzgq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:56.631: INFO: stderr: ""
Jan  2 21:09:56.631: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 21:09:56.631: INFO: validating pod update-demo-nautilus-djzgq
Jan  2 21:09:56.641: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 21:09:56.641: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 21:09:56.641: INFO: update-demo-nautilus-djzgq is verified up and running
Jan  2 21:09:56.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcpc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:56.773: INFO: stderr: ""
Jan  2 21:09:56.774: INFO: stdout: "true"
Jan  2 21:09:56.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pcpc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:56.917: INFO: stderr: ""
Jan  2 21:09:56.917: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 21:09:56.917: INFO: validating pod update-demo-nautilus-pcpc6
Jan  2 21:09:56.927: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 21:09:56.927: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 21:09:56.927: INFO: update-demo-nautilus-pcpc6 is verified up and running
STEP: using delete to clean up resources
Jan  2 21:09:56.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:57.069: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 21:09:57.070: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  2 21:09:57.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-9nx4c'
Jan  2 21:09:57.305: INFO: stderr: "No resources found.\n"
Jan  2 21:09:57.305: INFO: stdout: ""
Jan  2 21:09:57.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-9nx4c -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 21:09:57.485: INFO: stderr: ""
Jan  2 21:09:57.485: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:09:57.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9nx4c" for this suite.
Jan  2 21:10:21.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:10:21.757: INFO: namespace: e2e-tests-kubectl-9nx4c, resource: bindings, ignored listing per whitelist
Jan  2 21:10:21.886: INFO: namespace e2e-tests-kubectl-9nx4c deletion completed in 24.383681238s

• [SLOW TEST:63.993 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
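The scale test above follows a fixed polling pattern: list the pods behind the `name=update-demo` selector, compare the count against the expected replicas, then run the `containerStatuses` go-template check against each pod, retrying every 5 seconds until both conditions hold. A minimal sketch of that control flow, with kubectl replaced by stub functions (`get_pods` and `pod_running` are hypothetical stand-ins, not part of the e2e framework):

```shell
#!/bin/sh
# Sketch of the e2e poll loop seen in the log: list pods, require the
# expected count, then verify each pod reports "true" (running).
# kubectl is stubbed with shell functions so the flow runs anywhere.
get_pods()    { echo "update-demo-nautilus-a update-demo-nautilus-b"; }  # stand-in for: kubectl get pods -l name=update-demo
pod_running() { echo "true"; }  # stand-in for the containerStatuses template check

expected=2
for attempt in 1 2 3; do
  pods=$(get_pods)
  actual=$(echo "$pods" | wc -w | tr -d ' \t')
  if [ "$actual" -ne "$expected" ]; then
    echo "Replicas for name=update-demo: expected=$expected actual=$actual"
    sleep 5
    continue
  fi
  ok=1
  for p in $pods; do
    [ "$(pod_running "$p")" = "true" ] || { echo "$p is created but not running"; ok=0; }
  done
  if [ "$ok" -eq 1 ]; then
    echo "all $expected pods verified up and running"
    break
  fi
  sleep 5
done
```

With the stubs the first attempt succeeds immediately; in the log above, the same loop takes several 5-second rounds because the second nautilus pod is still pulling its image.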
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:10:21.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan  2 21:10:22.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:22.630: INFO: stderr: ""
Jan  2 21:10:22.630: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 21:10:22.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:22.989: INFO: stderr: ""
Jan  2 21:10:22.990: INFO: stdout: "update-demo-nautilus-6t55h update-demo-nautilus-mvk4z "
Jan  2 21:10:22.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6t55h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:23.202: INFO: stderr: ""
Jan  2 21:10:23.202: INFO: stdout: ""
Jan  2 21:10:23.202: INFO: update-demo-nautilus-6t55h is created but not running
Jan  2 21:10:28.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:28.355: INFO: stderr: ""
Jan  2 21:10:28.355: INFO: stdout: "update-demo-nautilus-6t55h update-demo-nautilus-mvk4z "
Jan  2 21:10:28.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6t55h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:28.518: INFO: stderr: ""
Jan  2 21:10:28.518: INFO: stdout: ""
Jan  2 21:10:28.518: INFO: update-demo-nautilus-6t55h is created but not running
Jan  2 21:10:33.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:33.795: INFO: stderr: ""
Jan  2 21:10:33.795: INFO: stdout: "update-demo-nautilus-6t55h update-demo-nautilus-mvk4z "
Jan  2 21:10:33.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6t55h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:34.040: INFO: stderr: ""
Jan  2 21:10:34.040: INFO: stdout: ""
Jan  2 21:10:34.040: INFO: update-demo-nautilus-6t55h is created but not running
Jan  2 21:10:39.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:39.236: INFO: stderr: ""
Jan  2 21:10:39.236: INFO: stdout: "update-demo-nautilus-6t55h update-demo-nautilus-mvk4z "
Jan  2 21:10:39.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6t55h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:39.400: INFO: stderr: ""
Jan  2 21:10:39.400: INFO: stdout: "true"
Jan  2 21:10:39.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6t55h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:39.540: INFO: stderr: ""
Jan  2 21:10:39.540: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 21:10:39.540: INFO: validating pod update-demo-nautilus-6t55h
Jan  2 21:10:39.557: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 21:10:39.557: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 21:10:39.557: INFO: update-demo-nautilus-6t55h is verified up and running
Jan  2 21:10:39.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvk4z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:39.687: INFO: stderr: ""
Jan  2 21:10:39.687: INFO: stdout: "true"
Jan  2 21:10:39.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvk4z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:10:39.789: INFO: stderr: ""
Jan  2 21:10:39.789: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 21:10:39.789: INFO: validating pod update-demo-nautilus-mvk4z
Jan  2 21:10:39.801: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 21:10:39.801: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 21:10:39.801: INFO: update-demo-nautilus-mvk4z is verified up and running
STEP: rolling-update to new replication controller
Jan  2 21:10:39.805: INFO: scanned /root for discovery docs: 
Jan  2 21:10:39.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:11:16.975: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  2 21:11:16.975: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 21:11:16.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:11:17.226: INFO: stderr: ""
Jan  2 21:11:17.226: INFO: stdout: "update-demo-kitten-8qd7s update-demo-kitten-f4vzx update-demo-nautilus-6t55h "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan  2 21:11:22.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:11:22.374: INFO: stderr: ""
Jan  2 21:11:22.374: INFO: stdout: "update-demo-kitten-8qd7s update-demo-kitten-f4vzx update-demo-nautilus-6t55h "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan  2 21:11:27.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:11:27.535: INFO: stderr: ""
Jan  2 21:11:27.535: INFO: stdout: "update-demo-kitten-8qd7s update-demo-kitten-f4vzx "
Jan  2 21:11:27.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8qd7s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:11:27.686: INFO: stderr: ""
Jan  2 21:11:27.686: INFO: stdout: "true"
Jan  2 21:11:27.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8qd7s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:11:27.832: INFO: stderr: ""
Jan  2 21:11:27.832: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  2 21:11:27.832: INFO: validating pod update-demo-kitten-8qd7s
Jan  2 21:11:27.878: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  2 21:11:27.878: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  2 21:11:27.878: INFO: update-demo-kitten-8qd7s is verified up and running
Jan  2 21:11:27.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f4vzx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:11:28.012: INFO: stderr: ""
Jan  2 21:11:28.012: INFO: stdout: "true"
Jan  2 21:11:28.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f4vzx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zgjz'
Jan  2 21:11:28.174: INFO: stderr: ""
Jan  2 21:11:28.174: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  2 21:11:28.174: INFO: validating pod update-demo-kitten-f4vzx
Jan  2 21:11:28.186: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  2 21:11:28.187: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  2 21:11:28.187: INFO: update-demo-kitten-f4vzx is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:11:28.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6zgjz" for this suite.
Jan  2 21:12:08.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:12:08.273: INFO: namespace: e2e-tests-kubectl-6zgjz, resource: bindings, ignored listing per whitelist
Jan  2 21:12:08.348: INFO: namespace e2e-tests-kubectl-6zgjz deletion completed in 40.153947178s

• [SLOW TEST:106.462 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
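The `rolling-update` output above reports a one-pod-at-a-time surge: scale `update-demo-kitten` up, then `update-demo-nautilus` down, keeping 2 pods available and never exceeding 3 in total. A rough simulation of that scaling arithmetic (this mirrors the reported steps only; it is not the actual kubectl implementation):

```shell
#!/bin/sh
# Reproduce the scaling sequence kubectl rolling-update printed above:
# alternate +1 on the new RC and -1 on the old RC until the new RC
# reaches the desired replica count.
old=2; new=0; desired=2
while [ "$new" -lt "$desired" ]; do
  new=$((new + 1))
  echo "Scaling update-demo-kitten up to $new"
  if [ "$old" -gt 0 ]; then
    old=$((old - 1))
    echo "Scaling update-demo-nautilus down to $old"
  fi
done
echo "Update succeeded. Deleting old controller: update-demo-nautilus"
```

The four scaling lines it prints match the stdout captured in the log; the brief `expected=2 actual=3` retries afterwards are the poll loop observing the surge pod before the old nautilus pod finishes terminating.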
S
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:12:08.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-qxkpc.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qxkpc.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qxkpc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-qxkpc.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qxkpc.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qxkpc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
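Both the wheezy and jessie probe scripts above derive a pod A-record name from the pod's own IP by splitting on dots and rejoining with dashes (the `hostname -i | awk -F.` fragment). That transformation can be sketched in isolation; the IP here is a made-up example, and only the namespace matches the log:

```shell
#!/bin/sh
# How the DNS probe builds the pod A-record name from the pod IP
# (the awk fragment in the probe commands). 10.44.0.2 is hypothetical.
ip="10.44.0.2"              # stand-in for: hostname -i
ns="e2e-tests-dns-qxkpc"    # namespace from the log
podARec=$(echo "$ip" | awk -F. -v ns="$ns" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}')
echo "$podARec"   # 10-44-0-2.e2e-tests-dns-qxkpc.pod.cluster.local
# The probe then resolves it: dig +notcp +noall +answer +search "$podARec" A
```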

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  2 21:12:22.724: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-8919e716-2da4-11ea-814c-0242ac110005)
Jan  2 21:12:22.729: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-8919e716-2da4-11ea-814c-0242ac110005)
Jan  2 21:12:22.739: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-8919e716-2da4-11ea-814c-0242ac110005)
Jan  2 21:12:22.748: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-8919e716-2da4-11ea-814c-0242ac110005)
Jan  2 21:12:22.752: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-8919e716-2da4-11ea-814c-0242ac110005)
Jan  2 21:12:22.755: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-8919e716-2da4-11ea-814c-0242ac110005)
Jan  2 21:12:22.760: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qxkpc.svc.cluster.local from pod e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-8919e716-2da4-11ea-814c-0242ac110005)
Jan  2 21:12:22.764: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-8919e716-2da4-11ea-814c-0242ac110005)
Jan  2 21:12:22.768: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-8919e716-2da4-11ea-814c-0242ac110005)
Jan  2 21:12:22.771: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005: the server could not find the requested resource (get pods dns-test-8919e716-2da4-11ea-814c-0242ac110005)
Jan  2 21:12:22.807: INFO: Lookups using e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qxkpc.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord]

Jan  2 21:12:27.968: INFO: DNS probes using e2e-tests-dns-qxkpc/dns-test-8919e716-2da4-11ea-814c-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:12:28.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-qxkpc" for this suite.
Jan  2 21:12:36.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:12:36.219: INFO: namespace: e2e-tests-dns-qxkpc, resource: bindings, ignored listing per whitelist
Jan  2 21:12:36.412: INFO: namespace e2e-tests-dns-qxkpc deletion completed in 8.293437313s

• [SLOW TEST:28.064 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:12:36.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 21:12:36.698: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan  2 21:12:36.750: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bbd5h/daemonsets","resourceVersion":"16966753"},"items":null}

Jan  2 21:12:36.757: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bbd5h/pods","resourceVersion":"16966753"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:12:36.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bbd5h" for this suite.
Jan  2 21:12:42.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:12:42.929: INFO: namespace: e2e-tests-daemonsets-bbd5h, resource: bindings, ignored listing per whitelist
Jan  2 21:12:42.946: INFO: namespace e2e-tests-daemonsets-bbd5h deletion completed in 6.177999044s

S [SKIPPING] [6.533 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan  2 21:12:36.698: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:12:42.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan  2 21:12:43.221: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
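`-p 0` asks the kernel for any free port, so the port has to be recovered from kubectl's startup line before anything can be curled through the proxy. A sketch, assuming the `Starting to serve on 127.0.0.1:<port>` startup-line format and a hypothetical `parse_proxy_port` helper:

```shell
#!/bin/sh
# Sketch: extract the dynamically chosen port from kubectl proxy's first
# output line. The helper name is illustrative, not from the test code.
parse_proxy_port() {
    # kubectl proxy prints e.g. "Starting to serve on 127.0.0.1:40123"
    sed -n 's/.*127\.0\.0\.1:\([0-9][0-9]*\).*/\1/p'
}
```

With a live cluster one would then do roughly `curl "http://127.0.0.1:$port/api/"` against the parsed port — a sketch, not the framework's actual Go implementation.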
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:12:43.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k9hqh" for this suite.
Jan  2 21:12:49.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:12:49.600: INFO: namespace: e2e-tests-kubectl-k9hqh, resource: bindings, ignored listing per whitelist
Jan  2 21:12:49.683: INFO: namespace e2e-tests-kubectl-k9hqh deletion completed in 6.237581445s

• [SLOW TEST:6.737 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:12:49.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-a1bd8496-2da4-11ea-814c-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a1bd8496-2da4-11ea-814c-0242ac110005
STEP: waiting to observe update in volume
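The "waiting to observe update in volume" step amounts to polling the mounted file until the kubelet syncs the new ConfigMap data into it. A minimal local sketch of that polling pattern — the helper name and default timeout are illustrative, not taken from the test:

```shell
#!/bin/sh
# Sketch of the observe-update pattern: poll a file until it holds the
# expected content, once per second, up to a timeout. Names illustrative.
wait_for_content() {
    # wait_for_content FILE WANT [TIMEOUT_SECONDS]
    file="$1"; want="$2"; timeout="${3:-60}"
    i=0
    while [ "$i" -lt "$timeout" ]; do
        [ "$(cat "$file" 2>/dev/null)" = "$want" ] && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```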
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:13:02.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dl9gl" for this suite.
Jan  2 21:13:26.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:13:26.296: INFO: namespace: e2e-tests-configmap-dl9gl, resource: bindings, ignored listing per whitelist
Jan  2 21:13:26.447: INFO: namespace e2e-tests-configmap-dl9gl deletion completed in 24.226907732s

• [SLOW TEST:36.763 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:13:26.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan  2 21:13:26.873: INFO: Waiting up to 5m0s for pod "var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005" in namespace "e2e-tests-var-expansion-t4s97" to be "success or failure"
Jan  2 21:13:26.901: INFO: Pod "var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.266277ms
Jan  2 21:13:28.920: INFO: Pod "var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046446892s
Jan  2 21:13:30.953: INFO: Pod "var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079112881s
Jan  2 21:13:33.614: INFO: Pod "var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.740136384s
Jan  2 21:13:36.755: INFO: Pod "var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.881676202s
Jan  2 21:13:38.772: INFO: Pod "var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.89806781s
STEP: Saw pod success
Jan  2 21:13:38.772: INFO: Pod "var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:13:38.781: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 21:13:38.903: INFO: Waiting for pod var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005 to disappear
Jan  2 21:13:38.952: INFO: Pod var-expansion-b7b8f920-2da4-11ea-814c-0242ac110005 no longer exists
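"Env composition" here means the kubelet expands `$(VAR)` references in an env var's value against previously defined vars before starting the container. A naive standalone sketch of that substitution — `expand_env` is a hypothetical name, and the real expander also handles `$$` escaping, which is omitted:

```shell
#!/bin/sh
# Naive sketch of $(VAR) env composition. Assumptions: no $$ escaping,
# simple values only; the helper name is illustrative.
expand_env() {
    # expand_env TEMPLATE KEY=VALUE... : substitute each $(KEY) occurrence
    tmpl="$1"; shift
    for kv in "$@"; do
        key="${kv%%=*}"; val="${kv#*=}"
        # [\$] matches a literal dollar sign in the sed pattern
        tmpl="$(printf '%s' "$tmpl" | sed "s/[\$](${key})/${val}/g")"
    done
    printf '%s\n' "$tmpl"
}
```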
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:13:38.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-t4s97" for this suite.
Jan  2 21:13:45.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:13:45.175: INFO: namespace: e2e-tests-var-expansion-t4s97, resource: bindings, ignored listing per whitelist
Jan  2 21:13:45.208: INFO: namespace e2e-tests-var-expansion-t4s97 deletion completed in 6.246100078s

• [SLOW TEST:18.761 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:13:45.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 21:13:45.612: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-2zmg5" to be "success or failure"
Jan  2 21:13:45.627: INFO: Pod "downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.184954ms
Jan  2 21:13:47.921: INFO: Pod "downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308763483s
Jan  2 21:13:49.947: INFO: Pod "downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335396512s
Jan  2 21:13:52.224: INFO: Pod "downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.611647921s
Jan  2 21:13:54.270: INFO: Pod "downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.657634101s
Jan  2 21:13:56.285: INFO: Pod "downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.672699892s
STEP: Saw pod success
Jan  2 21:13:56.285: INFO: Pod "downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:13:56.299: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 21:13:56.568: INFO: Waiting for pod downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005 to disappear
Jan  2 21:13:56.584: INFO: Pod downwardapi-volume-c2e44f3c-2da4-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:13:56.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2zmg5" for this suite.
Jan  2 21:14:02.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:14:02.858: INFO: namespace: e2e-tests-projected-2zmg5, resource: bindings, ignored listing per whitelist
Jan  2 21:14:02.886: INFO: namespace e2e-tests-projected-2zmg5 deletion completed in 6.285823504s

• [SLOW TEST:17.678 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:14:02.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:15:06.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-lw49z" for this suite.
Jan  2 21:15:14.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:15:14.421: INFO: namespace: e2e-tests-container-runtime-lw49z, resource: bindings, ignored listing per whitelist
Jan  2 21:15:14.796: INFO: namespace e2e-tests-container-runtime-lw49z deletion completed in 8.56543683s

• [SLOW TEST:71.909 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:15:14.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0102 21:15:25.330178       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 21:15:25.330: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:15:25.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-l7zpr" for this suite.
Jan  2 21:15:31.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:15:31.487: INFO: namespace: e2e-tests-gc-l7zpr, resource: bindings, ignored listing per whitelist
Jan  2 21:15:31.597: INFO: namespace e2e-tests-gc-l7zpr deletion completed in 6.259290956s

• [SLOW TEST:16.800 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:15:31.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-6zhnw
Jan  2 21:15:43.887: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-6zhnw
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 21:15:43.898: INFO: Initial restart count of pod liveness-exec is 0
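An exec liveness probe simply runs the configured command inside the container and treats exit status 0 as healthy; because this pod keeps `/tmp/health` present, the probe never fails and `restartCount` stays at 0 for the whole observation window. A local sketch of the check itself — the function name is illustrative:

```shell
#!/bin/sh
# Sketch of the exec probe's health criterion: the probed command's exit
# status (0 = healthy, non-zero = failed probe). Name is illustrative.
liveness_probe() {
    # $1: path the probe cats (the test uses /tmp/health)
    cat "$1" >/dev/null 2>&1
}
```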
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:19:44.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-6zhnw" for this suite.
Jan  2 21:19:50.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:19:50.764: INFO: namespace: e2e-tests-container-probe-6zhnw, resource: bindings, ignored listing per whitelist
Jan  2 21:19:50.771: INFO: namespace e2e-tests-container-probe-6zhnw deletion completed in 6.513688487s

• [SLOW TEST:259.174 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:19:50.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan  2 21:20:01.398: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-9cc37810-2da5-11ea-814c-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-cggfc", SelfLink:"/api/v1/namespaces/e2e-tests-pods-cggfc/pods/pod-submit-remove-9cc37810-2da5-11ea-814c-0242ac110005", UID:"9cd0739f-2da5-11ea-a994-fa163e34d433", ResourceVersion:"16967499", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713596791, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"113126551"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gbknd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00190e700), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gbknd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020c3098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001849da0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020c31b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0020c3360)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0020c3368), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0020c336c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713596791, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713596800, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713596800, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713596791, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000f97980), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000f97a20), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://bf4a399390c8f7838df2254afa7f204e217d3378659bcd1d7d21c29ba2a2cc1d"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:20:08.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-cggfc" for this suite.
Jan  2 21:20:15.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:20:15.158: INFO: namespace: e2e-tests-pods-cggfc, resource: bindings, ignored listing per whitelist
Jan  2 21:20:15.187: INFO: namespace e2e-tests-pods-cggfc deletion completed in 6.151227588s

• [SLOW TEST:24.415 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:20:15.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 21:20:15.294: INFO: Creating ReplicaSet my-hostname-basic-ab2d44bd-2da5-11ea-814c-0242ac110005
Jan  2 21:20:15.383: INFO: Pod name my-hostname-basic-ab2d44bd-2da5-11ea-814c-0242ac110005: Found 0 pods out of 1
Jan  2 21:20:20.770: INFO: Pod name my-hostname-basic-ab2d44bd-2da5-11ea-814c-0242ac110005: Found 1 pods out of 1
Jan  2 21:20:20.770: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ab2d44bd-2da5-11ea-814c-0242ac110005" is running
Jan  2 21:20:26.793: INFO: Pod "my-hostname-basic-ab2d44bd-2da5-11ea-814c-0242ac110005-zk5fm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 21:20:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 21:20:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ab2d44bd-2da5-11ea-814c-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 21:20:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ab2d44bd-2da5-11ea-814c-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 21:20:15 +0000 UTC Reason: Message:}])
Jan  2 21:20:26.793: INFO: Trying to dial the pod
Jan  2 21:20:31.904: INFO: Controller my-hostname-basic-ab2d44bd-2da5-11ea-814c-0242ac110005: Got expected result from replica 1 [my-hostname-basic-ab2d44bd-2da5-11ea-814c-0242ac110005-zk5fm]: "my-hostname-basic-ab2d44bd-2da5-11ea-814c-0242ac110005-zk5fm", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:20:31.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-rb9k8" for this suite.
Jan  2 21:20:38.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:20:38.161: INFO: namespace: e2e-tests-replicaset-rb9k8, resource: bindings, ignored listing per whitelist
Jan  2 21:20:38.204: INFO: namespace e2e-tests-replicaset-rb9k8 deletion completed in 6.200980426s

• [SLOW TEST:23.017 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:20:38.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  2 21:20:38.494: INFO: Waiting up to 5m0s for pod "pod-b8fb3f51-2da5-11ea-814c-0242ac110005" in namespace "e2e-tests-emptydir-br6sw" to be "success or failure"
Jan  2 21:20:38.626: INFO: Pod "pod-b8fb3f51-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 131.953931ms
Jan  2 21:20:41.079: INFO: Pod "pod-b8fb3f51-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.584443314s
Jan  2 21:20:43.096: INFO: Pod "pod-b8fb3f51-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.601883149s
Jan  2 21:20:45.125: INFO: Pod "pod-b8fb3f51-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.630628311s
Jan  2 21:20:47.764: INFO: Pod "pod-b8fb3f51-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.270193757s
Jan  2 21:20:49.833: INFO: Pod "pod-b8fb3f51-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.33865919s
Jan  2 21:20:52.353: INFO: Pod "pod-b8fb3f51-2da5-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.858494161s
STEP: Saw pod success
Jan  2 21:20:52.353: INFO: Pod "pod-b8fb3f51-2da5-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:20:52.651: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b8fb3f51-2da5-11ea-814c-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 21:20:52.725: INFO: Waiting for pod pod-b8fb3f51-2da5-11ea-814c-0242ac110005 to disappear
Jan  2 21:20:52.732: INFO: Pod pod-b8fb3f51-2da5-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:20:52.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-br6sw" for this suite.
Jan  2 21:20:58.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:20:59.196: INFO: namespace: e2e-tests-emptydir-br6sw, resource: bindings, ignored listing per whitelist
Jan  2 21:20:59.462: INFO: namespace e2e-tests-emptydir-br6sw deletion completed in 6.722067949s

• [SLOW TEST:21.258 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:20:59.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-688d6
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-688d6
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-688d6
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-688d6
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-688d6
Jan  2 21:21:14.125: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-688d6, name: ss-0, uid: ce394134-2da5-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan  2 21:21:14.881: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-688d6, name: ss-0, uid: ce394134-2da5-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  2 21:21:14.904: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-688d6, name: ss-0, uid: ce394134-2da5-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  2 21:21:15.071: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-688d6
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-688d6
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-688d6 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 21:21:29.942: INFO: Deleting all statefulset in ns e2e-tests-statefulset-688d6
Jan  2 21:21:29.986: INFO: Scaling statefulset ss to 0
Jan  2 21:21:40.062: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 21:21:40.068: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:21:40.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-688d6" for this suite.
Jan  2 21:21:46.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:21:46.384: INFO: namespace: e2e-tests-statefulset-688d6, resource: bindings, ignored listing per whitelist
Jan  2 21:21:46.411: INFO: namespace e2e-tests-statefulset-688d6 deletion completed in 6.301040453s

• [SLOW TEST:46.949 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:21:46.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:21:46.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-6rv75" for this suite.
Jan  2 21:21:52.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:21:52.754: INFO: namespace: e2e-tests-services-6rv75, resource: bindings, ignored listing per whitelist
Jan  2 21:21:52.801: INFO: namespace e2e-tests-services-6rv75 deletion completed in 6.16642817s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.389 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:21:52.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-e56fa6b0-2da5-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 21:21:53.166: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-dhgxp" to be "success or failure"
Jan  2 21:21:53.182: INFO: Pod "pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.610715ms
Jan  2 21:21:55.530: INFO: Pod "pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.364254884s
Jan  2 21:21:57.563: INFO: Pod "pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397365454s
Jan  2 21:21:59.961: INFO: Pod "pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.794626419s
Jan  2 21:22:01.978: INFO: Pod "pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.812263696s
Jan  2 21:22:03.998: INFO: Pod "pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.831668712s
STEP: Saw pod success
Jan  2 21:22:03.998: INFO: Pod "pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:22:04.003: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 21:22:05.351: INFO: Waiting for pod pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005 to disappear
Jan  2 21:22:05.540: INFO: Pod pod-projected-configmaps-e573d51c-2da5-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:22:05.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dhgxp" for this suite.
Jan  2 21:22:11.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:22:11.645: INFO: namespace: e2e-tests-projected-dhgxp, resource: bindings, ignored listing per whitelist
Jan  2 21:22:11.933: INFO: namespace e2e-tests-projected-dhgxp deletion completed in 6.377527868s

• [SLOW TEST:19.132 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:22:11.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0102 21:22:26.624848       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 21:22:26.625: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:22:26.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hqrvc" for this suite.
Jan  2 21:22:47.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:22:47.508: INFO: namespace: e2e-tests-gc-hqrvc, resource: bindings, ignored listing per whitelist
Jan  2 21:22:50.974: INFO: namespace e2e-tests-gc-hqrvc deletion completed in 24.318137744s

• [SLOW TEST:39.039 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:22:50.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 21:22:52.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-m7rq6'
Jan  2 21:22:55.688: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 21:22:55.688: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan  2 21:22:55.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-m7rq6'
Jan  2 21:22:56.059: INFO: stderr: ""
Jan  2 21:22:56.059: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:22:56.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-m7rq6" for this suite.
Jan  2 21:23:04.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:23:04.226: INFO: namespace: e2e-tests-kubectl-m7rq6, resource: bindings, ignored listing per whitelist
Jan  2 21:23:04.285: INFO: namespace e2e-tests-kubectl-m7rq6 deletion completed in 8.140435819s

• [SLOW TEST:13.310 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:23:04.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 21:23:15.141: INFO: Successfully updated pod "labelsupdate0ff85593-2da6-11ea-814c-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:23:19.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7smtr" for this suite.
Jan  2 21:23:41.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:23:41.381: INFO: namespace: e2e-tests-downward-api-7smtr, resource: bindings, ignored listing per whitelist
Jan  2 21:23:41.452: INFO: namespace e2e-tests-downward-api-7smtr deletion completed in 22.190241245s

• [SLOW TEST:37.167 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:23:41.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-26250d10-2da6-11ea-814c-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-26250e2b-2da6-11ea-814c-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-26250d10-2da6-11ea-814c-0242ac110005
STEP: Updating configmap cm-test-opt-upd-26250e2b-2da6-11ea-814c-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-26250e93-2da6-11ea-814c-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:25:11.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-28lrj" for this suite.
Jan  2 21:25:35.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:25:35.736: INFO: namespace: e2e-tests-configmap-28lrj, resource: bindings, ignored listing per whitelist
Jan  2 21:25:35.830: INFO: namespace e2e-tests-configmap-28lrj deletion completed in 24.242319371s

• [SLOW TEST:114.377 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:25:35.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan  2 21:25:36.172: INFO: Waiting up to 5m0s for pod "var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005" in namespace "e2e-tests-var-expansion-bhhqf" to be "success or failure"
Jan  2 21:25:36.185: INFO: Pod "var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.549377ms
Jan  2 21:25:38.534: INFO: Pod "var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.362440896s
Jan  2 21:25:40.586: INFO: Pod "var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.413919408s
Jan  2 21:25:42.892: INFO: Pod "var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.719633665s
Jan  2 21:25:44.990: INFO: Pod "var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.817951415s
Jan  2 21:25:47.004: INFO: Pod "var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.831624153s
STEP: Saw pod success
Jan  2 21:25:47.004: INFO: Pod "var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:25:47.008: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 21:25:47.071: INFO: Waiting for pod var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005 to disappear
Jan  2 21:25:47.343: INFO: Pod var-expansion-6a6d4490-2da6-11ea-814c-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:25:47.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bhhqf" for this suite.
Jan  2 21:25:53.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:25:53.625: INFO: namespace: e2e-tests-var-expansion-bhhqf, resource: bindings, ignored listing per whitelist
Jan  2 21:25:53.875: INFO: namespace e2e-tests-var-expansion-bhhqf deletion completed in 6.513786684s

• [SLOW TEST:18.045 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:25:53.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 21:25:54.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-gfd7s" to be "success or failure"
Jan  2 21:25:54.403: INFO: Pod "downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.58275ms
Jan  2 21:25:56.657: INFO: Pod "downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263698875s
Jan  2 21:25:58.707: INFO: Pod "downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312832512s
Jan  2 21:26:01.114: INFO: Pod "downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.720198898s
Jan  2 21:26:03.136: INFO: Pod "downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.741977288s
Jan  2 21:26:05.151: INFO: Pod "downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.757317993s
STEP: Saw pod success
Jan  2 21:26:05.151: INFO: Pod "downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:26:05.158: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 21:26:05.275: INFO: Waiting for pod downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005 to disappear
Jan  2 21:26:05.424: INFO: Pod downwardapi-volume-75321ac1-2da6-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:26:05.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gfd7s" for this suite.
Jan  2 21:26:11.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:26:11.529: INFO: namespace: e2e-tests-projected-gfd7s, resource: bindings, ignored listing per whitelist
Jan  2 21:26:11.659: INFO: namespace e2e-tests-projected-gfd7s deletion completed in 6.222989627s

• [SLOW TEST:17.784 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:26:11.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-gjf9
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 21:26:12.150: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-gjf9" in namespace "e2e-tests-subpath-62m27" to be "success or failure"
Jan  2 21:26:12.162: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.134763ms
Jan  2 21:26:14.335: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184727442s
Jan  2 21:26:16.358: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207222878s
Jan  2 21:26:18.956: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.805351816s
Jan  2 21:26:21.004: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.853588096s
Jan  2 21:26:23.018: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.867490205s
Jan  2 21:26:25.358: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.207344038s
Jan  2 21:26:27.373: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.22253498s
Jan  2 21:26:29.392: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Running", Reason="", readiness=false. Elapsed: 17.241492951s
Jan  2 21:26:31.411: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Running", Reason="", readiness=false. Elapsed: 19.260383612s
Jan  2 21:26:33.429: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Running", Reason="", readiness=false. Elapsed: 21.27797963s
Jan  2 21:26:35.449: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Running", Reason="", readiness=false. Elapsed: 23.298202763s
Jan  2 21:26:37.465: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Running", Reason="", readiness=false. Elapsed: 25.314583961s
Jan  2 21:26:39.484: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Running", Reason="", readiness=false. Elapsed: 27.33345172s
Jan  2 21:26:41.502: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Running", Reason="", readiness=false. Elapsed: 29.351755599s
Jan  2 21:26:43.515: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Running", Reason="", readiness=false. Elapsed: 31.364621804s
Jan  2 21:26:45.531: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Running", Reason="", readiness=false. Elapsed: 33.380720638s
Jan  2 21:26:47.561: INFO: Pod "pod-subpath-test-projected-gjf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.410572198s
STEP: Saw pod success
Jan  2 21:26:47.561: INFO: Pod "pod-subpath-test-projected-gjf9" satisfied condition "success or failure"
Jan  2 21:26:47.685: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-gjf9 container test-container-subpath-projected-gjf9: 
STEP: delete the pod
Jan  2 21:26:47.994: INFO: Waiting for pod pod-subpath-test-projected-gjf9 to disappear
Jan  2 21:26:48.038: INFO: Pod pod-subpath-test-projected-gjf9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-gjf9
Jan  2 21:26:48.038: INFO: Deleting pod "pod-subpath-test-projected-gjf9" in namespace "e2e-tests-subpath-62m27"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:26:48.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-62m27" for this suite.
Jan  2 21:26:54.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:26:54.502: INFO: namespace: e2e-tests-subpath-62m27, resource: bindings, ignored listing per whitelist
Jan  2 21:26:54.594: INFO: namespace e2e-tests-subpath-62m27 deletion completed in 6.431815547s

• [SLOW TEST:42.934 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:26:54.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan  2 21:26:54.876: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  2 21:26:54.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:26:55.565: INFO: stderr: ""
Jan  2 21:26:55.565: INFO: stdout: "service/redis-slave created\n"
Jan  2 21:26:55.566: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  2 21:26:55.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:26:56.136: INFO: stderr: ""
Jan  2 21:26:56.136: INFO: stdout: "service/redis-master created\n"
Jan  2 21:26:56.137: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  2 21:26:56.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:26:56.631: INFO: stderr: ""
Jan  2 21:26:56.632: INFO: stdout: "service/frontend created\n"
Jan  2 21:26:56.633: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  2 21:26:56.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:26:57.013: INFO: stderr: ""
Jan  2 21:26:57.013: INFO: stdout: "deployment.extensions/frontend created\n"
Jan  2 21:26:57.014: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  2 21:26:57.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:26:57.903: INFO: stderr: ""
Jan  2 21:26:57.903: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan  2 21:26:57.905: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  2 21:26:57.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:27:00.018: INFO: stderr: ""
Jan  2 21:27:00.019: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan  2 21:27:00.019: INFO: Waiting for all frontend pods to be Running.
Jan  2 21:27:30.073: INFO: Waiting for frontend to serve content.
Jan  2 21:27:30.255: INFO: Trying to add a new entry to the guestbook.
Jan  2 21:27:30.288: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  2 21:27:30.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:27:30.679: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 21:27:30.679: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 21:27:30.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:27:30.873: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 21:27:30.873: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 21:27:30.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:27:31.147: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 21:27:31.147: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 21:27:31.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:27:31.300: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 21:27:31.300: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 21:27:31.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:27:31.609: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 21:27:31.609: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 21:27:31.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-l8vqd'
Jan  2 21:27:32.090: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 21:27:32.090: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:27:32.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-l8vqd" for this suite.
Jan  2 21:28:18.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:28:18.555: INFO: namespace: e2e-tests-kubectl-l8vqd, resource: bindings, ignored listing per whitelist
Jan  2 21:28:18.609: INFO: namespace e2e-tests-kubectl-l8vqd deletion completed in 46.474334407s

• [SLOW TEST:84.015 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
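Note: the Deployment manifests echoed in the guestbook test above use the `extensions/v1beta1` API group, which matches the cluster version under test (v1.13.x) but was removed in Kubernetes 1.16. As a sketch only (not part of this run's output), the `apps/v1` equivalent of the redis-master Deployment would additionally require an explicit `spec.selector` matching the pod template labels:

```yaml
# Sketch: apps/v1 equivalent of the redis-master Deployment shown above.
# apps/v1 makes spec.selector mandatory; extensions/v1beta1 defaulted it
# from the pod template labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
```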
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:28:18.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan  2 21:28:19.857: INFO: created pod pod-service-account-defaultsa
Jan  2 21:28:19.858: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  2 21:28:19.887: INFO: created pod pod-service-account-mountsa
Jan  2 21:28:19.887: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  2 21:28:20.080: INFO: created pod pod-service-account-nomountsa
Jan  2 21:28:20.080: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  2 21:28:20.127: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  2 21:28:20.127: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  2 21:28:20.278: INFO: created pod pod-service-account-mountsa-mountspec
Jan  2 21:28:20.278: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  2 21:28:20.313: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  2 21:28:20.314: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  2 21:28:20.630: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  2 21:28:20.630: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  2 21:28:22.743: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  2 21:28:22.743: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  2 21:28:23.421: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  2 21:28:23.421: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:28:23.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-dvlhh" for this suite.
Jan  2 21:28:51.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:28:51.326: INFO: namespace: e2e-tests-svcaccounts-dvlhh, resource: bindings, ignored listing per whitelist
Jan  2 21:28:51.411: INFO: namespace e2e-tests-svcaccounts-dvlhh deletion completed in 27.315247615s

• [SLOW TEST:32.801 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:28:51.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-df02a614-2da6-11ea-814c-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 21:28:51.911: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-fw8pv" to be "success or failure"
Jan  2 21:28:51.945: INFO: Pod "pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.688707ms
Jan  2 21:28:54.333: INFO: Pod "pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.421409277s
Jan  2 21:28:56.356: INFO: Pod "pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445027517s
Jan  2 21:28:58.708: INFO: Pod "pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.796802673s
Jan  2 21:29:00.805: INFO: Pod "pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.894048194s
Jan  2 21:29:02.942: INFO: Pod "pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.030569818s
STEP: Saw pod success
Jan  2 21:29:02.942: INFO: Pod "pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:29:03.018: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 21:29:03.291: INFO: Waiting for pod pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005 to disappear
Jan  2 21:29:03.533: INFO: Pod pod-projected-configmaps-df12af58-2da6-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:29:03.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fw8pv" for this suite.
Jan  2 21:29:09.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:29:09.753: INFO: namespace: e2e-tests-projected-fw8pv, resource: bindings, ignored listing per whitelist
Jan  2 21:29:09.984: INFO: namespace e2e-tests-projected-fw8pv deletion completed in 6.431111684s

• [SLOW TEST:18.572 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:29:09.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ea006bad-2da6-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 21:29:10.214: INFO: Waiting up to 5m0s for pod "pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005" in namespace "e2e-tests-secrets-4c6qj" to be "success or failure"
Jan  2 21:29:10.223: INFO: Pod "pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.344202ms
Jan  2 21:29:12.247: INFO: Pod "pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032018945s
Jan  2 21:29:14.294: INFO: Pod "pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079118534s
Jan  2 21:29:16.521: INFO: Pod "pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30658165s
Jan  2 21:29:18.552: INFO: Pod "pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.337200057s
Jan  2 21:29:20.570: INFO: Pod "pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.355842053s
STEP: Saw pod success
Jan  2 21:29:20.571: INFO: Pod "pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:29:20.590: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 21:29:21.487: INFO: Waiting for pod pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005 to disappear
Jan  2 21:29:21.498: INFO: Pod pod-secrets-ea0106af-2da6-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:29:21.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4c6qj" for this suite.
Jan  2 21:29:29.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:29:29.896: INFO: namespace: e2e-tests-secrets-4c6qj, resource: bindings, ignored listing per whitelist
Jan  2 21:29:29.929: INFO: namespace e2e-tests-secrets-4c6qj deletion completed in 8.419904214s

• [SLOW TEST:19.944 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
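The repeated "Waiting up to 5m0s for pod … Phase=Pending … Succeeded" lines above come from the e2e framework polling the pod's phase until it reaches a terminal state ("success or failure"). A minimal sketch of that polling loop, with a hypothetical helper name and a simulated phase sequence (this is not the framework's actual Go implementation):

```python
import time

def wait_for_pod_terminal(get_phase, timeout_s=300, poll_s=2):
    """Poll get_phase() until the pod reaches a terminal phase
    ("Succeeded" or "Failed"), mirroring the log's
    'success or failure' condition. Returns the final phase."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase={phase!r}, elapsed={elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout_s:
            raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")
        time.sleep(poll_s)

# Simulated phase sequence like the log above: a few Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_terminal(lambda: next(phases), poll_s=0)
```

The 5m0s timeout and the roughly 2-second poll interval visible in the elapsed times above correspond to `timeout_s` and `poll_s` here.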
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:29:29.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-mmww
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 21:29:30.335: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mmww" in namespace "e2e-tests-subpath-6s66m" to be "success or failure"
Jan  2 21:29:30.383: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Pending", Reason="", readiness=false. Elapsed: 47.79214ms
Jan  2 21:29:32.810: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474872827s
Jan  2 21:29:34.823: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Pending", Reason="", readiness=false. Elapsed: 4.487950841s
Jan  2 21:29:36.915: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579188833s
Jan  2 21:29:38.976: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Pending", Reason="", readiness=false. Elapsed: 8.640242498s
Jan  2 21:29:41.164: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Pending", Reason="", readiness=false. Elapsed: 10.828833701s
Jan  2 21:29:43.240: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Pending", Reason="", readiness=false. Elapsed: 12.904948659s
Jan  2 21:29:45.744: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Pending", Reason="", readiness=false. Elapsed: 15.408439018s
Jan  2 21:29:47.762: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Pending", Reason="", readiness=false. Elapsed: 17.426007852s
Jan  2 21:29:49.773: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Running", Reason="", readiness=false. Elapsed: 19.437526306s
Jan  2 21:29:51.795: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Running", Reason="", readiness=false. Elapsed: 21.459444464s
Jan  2 21:29:53.814: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Running", Reason="", readiness=false. Elapsed: 23.47820107s
Jan  2 21:29:55.840: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Running", Reason="", readiness=false. Elapsed: 25.504046146s
Jan  2 21:29:57.862: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Running", Reason="", readiness=false. Elapsed: 27.526188479s
Jan  2 21:29:59.875: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Running", Reason="", readiness=false. Elapsed: 29.5390406s
Jan  2 21:30:01.901: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Running", Reason="", readiness=false. Elapsed: 31.565848879s
Jan  2 21:30:03.929: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Running", Reason="", readiness=false. Elapsed: 33.593324446s
Jan  2 21:30:06.018: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Running", Reason="", readiness=false. Elapsed: 35.681974901s
Jan  2 21:30:08.130: INFO: Pod "pod-subpath-test-configmap-mmww": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.794434896s
STEP: Saw pod success
Jan  2 21:30:08.131: INFO: Pod "pod-subpath-test-configmap-mmww" satisfied condition "success or failure"
Jan  2 21:30:08.218: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-mmww container test-container-subpath-configmap-mmww: 
STEP: delete the pod
Jan  2 21:30:08.950: INFO: Waiting for pod pod-subpath-test-configmap-mmww to disappear
Jan  2 21:30:09.079: INFO: Pod pod-subpath-test-configmap-mmww no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mmww
Jan  2 21:30:09.079: INFO: Deleting pod "pod-subpath-test-configmap-mmww" in namespace "e2e-tests-subpath-6s66m"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:30:09.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-6s66m" for this suite.
Jan  2 21:30:17.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:30:17.241: INFO: namespace: e2e-tests-subpath-6s66m, resource: bindings, ignored listing per whitelist
Jan  2 21:30:17.315: INFO: namespace e2e-tests-subpath-6s66m deletion completed in 8.208510369s

• [SLOW TEST:47.385 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:30:17.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-122ad67a-2da7-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 21:30:17.627: INFO: Waiting up to 5m0s for pod "pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005" in namespace "e2e-tests-secrets-mgn69" to be "success or failure"
Jan  2 21:30:17.736: INFO: Pod "pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 109.33481ms
Jan  2 21:30:19.746: INFO: Pod "pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119562144s
Jan  2 21:30:21.764: INFO: Pod "pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136912213s
Jan  2 21:30:23.917: INFO: Pod "pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.290484603s
Jan  2 21:30:26.098: INFO: Pod "pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.471526627s
Jan  2 21:30:28.115: INFO: Pod "pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.488419439s
Jan  2 21:30:30.136: INFO: Pod "pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.508847664s
STEP: Saw pod success
Jan  2 21:30:30.136: INFO: Pod "pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:30:30.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 21:30:30.690: INFO: Waiting for pod pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005 to disappear
Jan  2 21:30:30.705: INFO: Pod pod-secrets-122b88f5-2da7-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:30:30.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mgn69" for this suite.
Jan  2 21:30:36.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:30:37.067: INFO: namespace: e2e-tests-secrets-mgn69, resource: bindings, ignored listing per whitelist
Jan  2 21:30:37.116: INFO: namespace e2e-tests-secrets-mgn69 deletion completed in 6.378601409s

• [SLOW TEST:19.800 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:30:37.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-1df3e114-2da7-11ea-814c-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 21:30:37.388: INFO: Waiting up to 5m0s for pod "pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005" in namespace "e2e-tests-secrets-59qnk" to be "success or failure"
Jan  2 21:30:37.432: INFO: Pod "pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.123984ms
Jan  2 21:30:39.555: INFO: Pod "pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166986901s
Jan  2 21:30:41.566: INFO: Pod "pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177702702s
Jan  2 21:30:44.070: INFO: Pod "pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.682395878s
Jan  2 21:30:46.906: INFO: Pod "pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.517982143s
Jan  2 21:30:48.926: INFO: Pod "pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.537896655s
STEP: Saw pod success
Jan  2 21:30:48.926: INFO: Pod "pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:30:48.955: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 21:30:49.116: INFO: Waiting for pod pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005 to disappear
Jan  2 21:30:49.142: INFO: Pod pod-secrets-1df63f10-2da7-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:30:49.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-59qnk" for this suite.
Jan  2 21:30:55.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:30:55.519: INFO: namespace: e2e-tests-secrets-59qnk, resource: bindings, ignored listing per whitelist
Jan  2 21:30:55.531: INFO: namespace e2e-tests-secrets-59qnk deletion completed in 6.355946994s

• [SLOW TEST:18.414 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:30:55.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-s56fg
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 21:30:55.859: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 21:31:36.330: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-s56fg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 21:31:36.330: INFO: >>> kubeConfig: /root/.kube/config
I0102 21:31:36.452155       8 log.go:172] (0xc0011c2420) (0xc0019dc8c0) Create stream
I0102 21:31:36.452336       8 log.go:172] (0xc0011c2420) (0xc0019dc8c0) Stream added, broadcasting: 1
I0102 21:31:36.464804       8 log.go:172] (0xc0011c2420) Reply frame received for 1
I0102 21:31:36.465026       8 log.go:172] (0xc0011c2420) (0xc00242cb40) Create stream
I0102 21:31:36.465099       8 log.go:172] (0xc0011c2420) (0xc00242cb40) Stream added, broadcasting: 3
I0102 21:31:36.469489       8 log.go:172] (0xc0011c2420) Reply frame received for 3
I0102 21:31:36.469594       8 log.go:172] (0xc0011c2420) (0xc0026be780) Create stream
I0102 21:31:36.469630       8 log.go:172] (0xc0011c2420) (0xc0026be780) Stream added, broadcasting: 5
I0102 21:31:36.472416       8 log.go:172] (0xc0011c2420) Reply frame received for 5
I0102 21:31:36.861266       8 log.go:172] (0xc0011c2420) Data frame received for 3
I0102 21:31:36.861478       8 log.go:172] (0xc00242cb40) (3) Data frame handling
I0102 21:31:36.861539       8 log.go:172] (0xc00242cb40) (3) Data frame sent
I0102 21:31:37.054597       8 log.go:172] (0xc0011c2420) Data frame received for 1
I0102 21:31:37.055070       8 log.go:172] (0xc0019dc8c0) (1) Data frame handling
I0102 21:31:37.055134       8 log.go:172] (0xc0019dc8c0) (1) Data frame sent
I0102 21:31:37.055456       8 log.go:172] (0xc0011c2420) (0xc0026be780) Stream removed, broadcasting: 5
I0102 21:31:37.055621       8 log.go:172] (0xc0011c2420) (0xc00242cb40) Stream removed, broadcasting: 3
I0102 21:31:37.055693       8 log.go:172] (0xc0011c2420) (0xc0019dc8c0) Stream removed, broadcasting: 1
I0102 21:31:37.055734       8 log.go:172] (0xc0011c2420) Go away received
I0102 21:31:37.056076       8 log.go:172] (0xc0011c2420) (0xc0019dc8c0) Stream removed, broadcasting: 1
I0102 21:31:37.056102       8 log.go:172] (0xc0011c2420) (0xc00242cb40) Stream removed, broadcasting: 3
I0102 21:31:37.056127       8 log.go:172] (0xc0011c2420) (0xc0026be780) Stream removed, broadcasting: 5
Jan  2 21:31:37.056: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:31:37.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-s56fg" for this suite.
Jan  2 21:32:01.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:32:01.208: INFO: namespace: e2e-tests-pod-network-test-s56fg, resource: bindings, ignored listing per whitelist
Jan  2 21:32:01.314: INFO: namespace e2e-tests-pod-network-test-s56fg deletion completed in 24.234262597s

• [SLOW TEST:65.783 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
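The networking test above execs `curl … http://10.32.0.4:8080/hostName | grep -v '^\s*$'` from a host test pod and passes once every expected netserver pod has reported its hostname ("Found all expected endpoints: [netserver-0]"). A sketch of that check in Python, under the assumption that each response body is the pod's hostname (helper names are hypothetical):

```python
import re

def filter_nonblank(output):
    """Equivalent of the log's `grep -v '^\\s*$'`: drop blank lines
    from the curl response before treating it as a hostname."""
    return [line for line in output.splitlines() if not re.match(r"^\s*$", line)]

def endpoints_found(responses, expected):
    """Collect every non-blank hostname seen across repeated curls of
    /hostName; the check passes once the set of hostnames covers all
    expected netserver pods."""
    seen = set()
    for resp in responses:
        seen.update(filter_nonblank(resp))
    return expected <= seen

# Simulated responses from the /hostName endpoint of one netserver pod.
ok = endpoints_found(["netserver-0\n", "\n"], {"netserver-0"})
```

The interleaved `log.go` stream lines in the log are the SPDY exec transport (stdout on stream 3, stderr on stream 5) that carries this curl output back to the test runner.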
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:32:01.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 21:32:01.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-knv9f'
Jan  2 21:32:01.789: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 21:32:01.789: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  2 21:32:01.839: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  2 21:32:01.895: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  2 21:32:02.148: INFO: scanned /root for discovery docs: 
Jan  2 21:32:02.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-knv9f'
Jan  2 21:32:30.194: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  2 21:32:30.195: INFO: stdout: "Created e2e-test-nginx-rc-d7f0a33f874dc7ed74552f9c6c0c3f87\nScaling up e2e-test-nginx-rc-d7f0a33f874dc7ed74552f9c6c0c3f87 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d7f0a33f874dc7ed74552f9c6c0c3f87 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d7f0a33f874dc7ed74552f9c6c0c3f87 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  2 21:32:30.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-knv9f'
Jan  2 21:32:30.350: INFO: stderr: ""
Jan  2 21:32:30.351: INFO: stdout: "e2e-test-nginx-rc-d7f0a33f874dc7ed74552f9c6c0c3f87-w7ppl e2e-test-nginx-rc-lbt56 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  2 21:32:35.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-knv9f'
Jan  2 21:32:35.562: INFO: stderr: ""
Jan  2 21:32:35.562: INFO: stdout: "e2e-test-nginx-rc-d7f0a33f874dc7ed74552f9c6c0c3f87-w7ppl "
Jan  2 21:32:35.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d7f0a33f874dc7ed74552f9c6c0c3f87-w7ppl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-knv9f'
Jan  2 21:32:35.732: INFO: stderr: ""
Jan  2 21:32:35.732: INFO: stdout: "true"
Jan  2 21:32:35.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d7f0a33f874dc7ed74552f9c6c0c3f87-w7ppl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-knv9f'
Jan  2 21:32:35.884: INFO: stderr: ""
Jan  2 21:32:35.884: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  2 21:32:35.884: INFO: e2e-test-nginx-rc-d7f0a33f874dc7ed74552f9c6c0c3f87-w7ppl is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan  2 21:32:35.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-knv9f'
Jan  2 21:32:36.085: INFO: stderr: ""
Jan  2 21:32:36.086: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:32:36.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-knv9f" for this suite.
Jan  2 21:33:00.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:33:00.318: INFO: namespace: e2e-tests-kubectl-knv9f, resource: bindings, ignored listing per whitelist
Jan  2 21:33:00.493: INFO: namespace e2e-tests-kubectl-knv9f deletion completed in 24.399070421s

• [SLOW TEST:59.178 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
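The rolling-update stdout above spells out the scaling order: bring the new replication controller up one replica at a time and the old one down, while keeping at least 1 pod available and never exceeding 2 pods in total. A toy simulation of that ordering (hypothetical helper, not kubectl's implementation):

```python
def rolling_update_steps(old=1, desired=1, min_available=1, max_total=2):
    """Reproduce the scaling order kubectl rolling-update reports:
    scale the new controller up while total pods stay within max_total,
    then scale the old controller down while availability stays at or
    above min_available."""
    new = 0
    steps = []
    while old > 0 or new < desired:
        if new < desired and old + new < max_total:
            new += 1
            steps.append(f"Scaling new up to {new}")
        elif old > 0 and old + new - 1 >= min_available:
            old -= 1
            steps.append(f"Scaling old down to {old}")
    return steps

steps = rolling_update_steps()
```

With the log's parameters (1 old replica, 1 desired, keep 1 available, don't exceed 2) this yields exactly the two steps printed above: new up to 1, then old down to 0. As the deprecation warning in the log notes, `rolling-update` only works on replication controllers and was replaced by `kubectl rollout` on Deployments.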
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:33:00.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-73689e0f-2da7-11ea-814c-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:33:12.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mppgr" for this suite.
Jan  2 21:33:36.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:33:37.088: INFO: namespace: e2e-tests-configmap-mppgr, resource: bindings, ignored listing per whitelist
Jan  2 21:33:37.137: INFO: namespace e2e-tests-configmap-mppgr deletion completed in 24.219138789s

• [SLOW TEST:36.643 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
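The ConfigMap test above verifies that binary payloads survive the round trip into a volume. In the Kubernetes API, a ConfigMap's `data` field holds UTF-8 text while `binaryData` values are base64-encoded strings; the kubelet decodes them and writes raw bytes into the mounted files. A sketch of such an object (the names and payload are illustrative, not the test's actual fixture):

```python
import base64

# A ConfigMap carrying both text and binary payloads, as in the test above.
# `data` values are plain UTF-8 strings; `binaryData` values must be
# base64-encoded in the API object.
binary_payload = bytes([0xDE, 0xAD, 0xBE, 0xEF])
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test-upd"},
    "data": {"text-data": "hello"},
    "binaryData": {"binary-file": base64.b64encode(binary_payload).decode("ascii")},
}

# What the kubelet writes into the volume file for the binaryData key.
decoded = base64.b64decode(configmap["binaryData"]["binary-file"])
```

The two "Waiting for pod with text data / binary data" steps in the log correspond to reading back the `data` and `binaryData` files from the mounted volume.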
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:33:37.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 21:33:37.354: INFO: Waiting up to 5m0s for pod "downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005" in namespace "e2e-tests-projected-jl7fb" to be "success or failure"
Jan  2 21:33:37.362: INFO: Pod "downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.818433ms
Jan  2 21:33:39.804: INFO: Pod "downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.450060383s
Jan  2 21:33:41.814: INFO: Pod "downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.460191233s
Jan  2 21:33:44.592: INFO: Pod "downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.238351609s
Jan  2 21:33:46.610: INFO: Pod "downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.255490673s
Jan  2 21:33:48.632: INFO: Pod "downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.277586393s
STEP: Saw pod success
Jan  2 21:33:48.632: INFO: Pod "downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:33:48.732: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 21:33:49.036: INFO: Waiting for pod downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005 to disappear
Jan  2 21:33:49.049: INFO: Pod downwardapi-volume-892d9de8-2da7-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:33:49.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jl7fb" for this suite.
Jan  2 21:33:55.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:33:55.176: INFO: namespace: e2e-tests-projected-jl7fb, resource: bindings, ignored listing per whitelist
Jan  2 21:33:55.296: INFO: namespace e2e-tests-projected-jl7fb deletion completed in 6.228460333s

• [SLOW TEST:18.159 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
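The downward API test above checks the documented fallback: when a container declares no memory limit, the downward API volume reports the node's allocatable memory rather than zero. The selection logic reduces to a one-line fallback (hypothetical helper, not kubelet code):

```python
def effective_memory_limit(container_limit_bytes, node_allocatable_bytes):
    """Behavior the test verifies: an explicit container memory limit is
    reported as-is; with no limit set, the downward API falls back to
    the node's allocatable memory."""
    if container_limit_bytes is not None:
        return container_limit_bytes
    return node_allocatable_bytes

# No limit set: the node's allocatable memory is reported instead.
no_limit = effective_memory_limit(None, 4 * 1024**3)
# Explicit limit: reported unchanged.
explicit = effective_memory_limit(512 * 1024**2, 4 * 1024**3)
```

The same fallback applies to the CPU-limit variant exercised by the Downward API volume test that follows in this log.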
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:33:55.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 21:33:55.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005" in namespace "e2e-tests-downward-api-smv7x" to be "success or failure"
Jan  2 21:33:55.552: INFO: Pod "downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.491737ms
Jan  2 21:33:57.566: INFO: Pod "downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031613761s
Jan  2 21:33:59.594: INFO: Pod "downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0592739s
Jan  2 21:34:02.465: INFO: Pod "downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.930643399s
Jan  2 21:34:04.485: INFO: Pod "downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.949957771s
Jan  2 21:34:06.505: INFO: Pod "downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.970599931s
STEP: Saw pod success
Jan  2 21:34:06.506: INFO: Pod "downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005" satisfied condition "success or failure"
Jan  2 21:34:06.530: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 21:34:06.750: INFO: Waiting for pod downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005 to disappear
Jan  2 21:34:07.839: INFO: Pod downwardapi-volume-94112f0a-2da7-11ea-814c-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:34:07.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-smv7x" for this suite.
Jan  2 21:34:16.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:34:16.643: INFO: namespace: e2e-tests-downward-api-smv7x, resource: bindings, ignored listing per whitelist
Jan  2 21:34:16.806: INFO: namespace e2e-tests-downward-api-smv7x deletion completed in 8.587613408s

• [SLOW TEST:21.509 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:34:16.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan  2 21:34:29.252: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:34:56.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-9ddh8" for this suite.
Jan  2 21:35:02.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:35:02.737: INFO: namespace: e2e-tests-namespaces-9ddh8, resource: bindings, ignored listing per whitelist
Jan  2 21:35:02.805: INFO: namespace e2e-tests-namespaces-9ddh8 deletion completed in 6.296547218s
STEP: Destroying namespace "e2e-tests-nsdeletetest-wgvwt" for this suite.
Jan  2 21:35:02.808: INFO: Namespace e2e-tests-nsdeletetest-wgvwt was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-n7bzv" for this suite.
Jan  2 21:35:08.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:35:09.001: INFO: namespace: e2e-tests-nsdeletetest-n7bzv, resource: bindings, ignored listing per whitelist
Jan  2 21:35:09.026: INFO: namespace e2e-tests-nsdeletetest-n7bzv deletion completed in 6.218598055s

• [SLOW TEST:52.220 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:35:09.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:35:09.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-6pgtx" for this suite.
Jan  2 21:35:15.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:35:15.763: INFO: namespace: e2e-tests-kubelet-test-6pgtx, resource: bindings, ignored listing per whitelist
Jan  2 21:35:15.871: INFO: namespace e2e-tests-kubelet-test-6pgtx deletion completed in 6.296782666s

• [SLOW TEST:6.844 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:35:15.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 21:35:16.259: INFO: Creating deployment "test-recreate-deployment"
Jan  2 21:35:16.288: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  2 21:35:16.300: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan  2 21:35:18.379: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  2 21:35:18.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 21:35:20.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 21:35:22.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 21:35:24.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 21:35:26.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597716, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 21:35:28.399: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  2 21:35:28.421: INFO: Updating deployment test-recreate-deployment
Jan  2 21:35:28.421: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 21:35:28.978: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-hvqsf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hvqsf/deployments/test-recreate-deployment,UID:c431fbcf-2da7-11ea-a994-fa163e34d433,ResourceVersion:16969855,Generation:2,CreationTimestamp:2020-01-02 21:35:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-02 21:35:28 +0000 UTC 2020-01-02 21:35:28 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-02 21:35:28 +0000 UTC 2020-01-02 21:35:16 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  2 21:35:28.997: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-hvqsf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hvqsf/replicasets/test-recreate-deployment-589c4bfd,UID:cb90dd08-2da7-11ea-a994-fa163e34d433,ResourceVersion:16969852,Generation:1,CreationTimestamp:2020-01-02 21:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c431fbcf-2da7-11ea-a994-fa163e34d433 0xc001c89a2f 0xc001c89a40}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 21:35:28.998: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  2 21:35:28.999: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-hvqsf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hvqsf/replicasets/test-recreate-deployment-5bf7f65dc,UID:c437c67d-2da7-11ea-a994-fa163e34d433,ResourceVersion:16969843,Generation:2,CreationTimestamp:2020-01-02 21:35:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c431fbcf-2da7-11ea-a994-fa163e34d433 0xc001c89f90 0xc001c89f91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 21:35:29.023: INFO: Pod "test-recreate-deployment-589c4bfd-f4tpf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-f4tpf,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-hvqsf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hvqsf/pods/test-recreate-deployment-589c4bfd-f4tpf,UID:cb9582fc-2da7-11ea-a994-fa163e34d433,ResourceVersion:16969850,Generation:0,CreationTimestamp:2020-01-02 21:35:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd cb90dd08-2da7-11ea-a994-fa163e34d433 0xc001ba9aef 0xc001ba9b00}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq9zx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq9zx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq9zx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ba9b70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ba9d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 21:35:28 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:35:29.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hvqsf" for this suite.
Jan  2 21:35:37.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:35:37.312: INFO: namespace: e2e-tests-deployment-hvqsf, resource: bindings, ignored listing per whitelist
Jan  2 21:35:37.312: INFO: namespace e2e-tests-deployment-hvqsf deletion completed in 8.161493122s

• [SLOW TEST:21.441 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:35:37.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 21:36:04.261: INFO: Container started at 2020-01-02 21:35:46 +0000 UTC, pod became ready at 2020-01-02 21:36:03 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:36:04.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wsc9z" for this suite.
Jan  2 21:36:26.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:36:26.752: INFO: namespace: e2e-tests-container-probe-wsc9z, resource: bindings, ignored listing per whitelist
Jan  2 21:36:26.889: INFO: namespace e2e-tests-container-probe-wsc9z deletion completed in 22.61863675s

• [SLOW TEST:49.577 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:36:26.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-ee67928c-2da7-11ea-814c-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-ee67928c-2da7-11ea-814c-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:37:53.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j2kr9" for this suite.
Jan  2 21:38:19.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:38:19.547: INFO: namespace: e2e-tests-projected-j2kr9, resource: bindings, ignored listing per whitelist
Jan  2 21:38:19.583: INFO: namespace e2e-tests-projected-j2kr9 deletion completed in 26.25732892s

• [SLOW TEST:112.694 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:38:19.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  2 21:38:19.751: INFO: PodSpec: initContainers in spec.initContainers
Jan  2 21:39:30.751: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-31904885-2da8-11ea-814c-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-d9m72", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-d9m72/pods/pod-init-31904885-2da8-11ea-814c-0242ac110005", UID:"3199eca9-2da8-11ea-a994-fa163e34d433", ResourceVersion:"16970245", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713597899, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"751664698"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-55hnn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001ac2040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-55hnn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-55hnn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-55hnn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a14458), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001dbc300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a14600)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a14630)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001a14638), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001a1463c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597899, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597899, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597899, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713597899, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc001cf8060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001dc0070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001dc00e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://4e4354b8630fb5cff722edf9491b60c3f4cef1e23accb9d1413f8f293e501acd"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001cf80c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001cf8080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:39:30.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-d9m72" for this suite.
Jan  2 21:39:54.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:39:55.048: INFO: namespace: e2e-tests-init-container-d9m72, resource: bindings, ignored listing per whitelist
Jan  2 21:39:55.437: INFO: namespace e2e-tests-init-container-d9m72 deletion completed in 24.559348183s

• [SLOW TEST:95.853 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
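(Editor's aside: the pod spec dumped in the log above makes this failure mode easy to reproduce outside the suite. A minimal manifest matching the logged spec — `init1` runs `/bin/false` and fails forever, so `init2` and the app container `run1` never start on a `restartPolicy: Always` pod; the name is illustrative — might look like:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example        # illustrative name; the test generates a UUID-suffixed one
  labels:
    name: foo
spec:
  restartPolicy: Always         # failed init containers are retried with backoff;
                                # app containers stay Waiting the whole time
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]     # always exits 1, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                  # matches the Guaranteed QoS spec in the dump above
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
```

(`kubectl get pod` for such a pod shows an `Init:` status such as `Init:Error` and then `Init:CrashLoopBackOff` as the `RestartCount` of `init1` climbs, exactly as the status dump above records — `RestartCount:3` with `run1` still `Waiting`.)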
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:39:55.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0102 21:40:37.472806       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 21:40:37.473: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:40:37.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kgbtc" for this suite.
Jan  2 21:40:49.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:40:49.702: INFO: namespace: e2e-tests-gc-kgbtc, resource: bindings, ignored listing per whitelist
Jan  2 21:40:49.764: INFO: namespace e2e-tests-gc-kgbtc deletion completed in 12.285289591s

• [SLOW TEST:54.327 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
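(Editor's aside: the "delete options" this test exercises is the `propagationPolicy` field of the `DeleteOptions` sent with the DELETE request for the ReplicationController. A sketch of the request body — `Orphan` strips the owner references from the pods instead of cascading the delete to them, which is why the log waits 30 seconds to confirm the garbage collector leaves the pods alone:)

```yaml
# Body of the DELETE request for the ReplicationController.
# Orphan: remove ownerReferences from dependents instead of deleting them.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

(With the kubectl of this vintage — v1.13 per the suite header — the rough equivalent is `kubectl delete rc <name> --cascade=false`; later kubectl spells it `--cascade=orphan`. Both are stated as assumptions about the CLI, not something this log shows.)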
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 21:40:49.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 21:41:16.125: INFO: Successfully updated pod "annotationupdate8bd4da4b-2da8-11ea-814c-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 21:41:18.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gzq5r" for this suite.
Jan  2 21:41:42.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 21:41:42.550: INFO: namespace: e2e-tests-projected-gzq5r, resource: bindings, ignored listing per whitelist
Jan  2 21:41:42.756: INFO: namespace e2e-tests-projected-gzq5r deletion completed in 24.411530539s

• [SLOW TEST:52.992 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
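(Editor's aside: the "annotations on modification" behavior above relies on a projected downwardAPI volume, whose file contents the kubelet refreshes when pod metadata changes. A minimal sketch, with an illustrative name and mount path — the test's pod is named `annotationupdate<uuid>` per the log:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # illustrative
  annotations:
    build: "one"
spec:
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    # Poll the projected file; its contents change after the annotation is updated.
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

(After `kubectl annotate pod annotationupdate-example build=two --overwrite`, the kubelet rewrites `/etc/podinfo/annotations` without restarting the container — the "Successfully updated pod" line above is the test observing that refresh.)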
SSSSSSSSSSSSSSSSSSS
Jan  2 21:41:42.758: INFO: Running AfterSuite actions on all nodes
Jan  2 21:41:42.758: INFO: Running AfterSuite actions on node 1
Jan  2 21:41:42.759: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9263.544 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS