I0627 17:49:07.468540 8 e2e.go:224] Starting e2e run "dc35f27e-9903-11e9-8fa9-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1561657746 - Will randomize all specs
Will run 201 of 2162 specs

Jun 27 17:49:07.679: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 17:49:07.683: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 27 17:49:07.701: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 27 17:49:07.739: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 27 17:49:07.739: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 27 17:49:07.739: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 27 17:49:07.748: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 27 17:49:07.748: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jun 27 17:49:07.748: INFO: e2e test version: v1.13.7
Jun 27 17:49:07.750: INFO: kube-apiserver version: v1.13.7
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:49:07.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
Jun 27 17:49:07.929: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 27 17:49:16.067: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 27 17:49:16.074: INFO: Pod pod-with-prestop-http-hook still exists
Jun 27 17:49:18.074: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 27 17:49:18.080: INFO: Pod pod-with-prestop-http-hook still exists
Jun 27 17:49:20.074: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 27 17:49:20.077: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:49:20.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-nzhhp" for this suite.
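The preStop flow logged above can be sketched as a pod manifest. This is a minimal, hypothetical reconstruction, not the test's actual spec: the handler address, port, and `/echo` path are illustrative values chosen for the sketch, since the log only shows the pod name.

```python
import json

def prestop_pod_manifest(handler_ip, handler_port=8080):
    """Build a pod manifest with an HTTP preStop hook.

    Sketch of the kind of pod the lifecycle-hook test creates: on
    deletion, the kubelet issues an HTTP GET to the handler pod
    *before* sending SIGTERM to the container. handler_ip/port and
    the /echo path are illustrative, not taken from the log.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-with-prestop-http-hook"},
        "spec": {
            "containers": [{
                "name": "pod-with-prestop-http-hook",
                "image": "nginx",
                "lifecycle": {
                    "preStop": {
                        "httpGet": {
                            "host": handler_ip,
                            "port": handler_port,
                            "path": "/echo?msg=prestop",
                        }
                    }
                },
            }],
            # Consistent with the log above: the pod lingers for a few
            # poll cycles while the hook and graceful deletion run.
            "terminationGracePeriodSeconds": 30,
        },
    }

print(json.dumps(prestop_pod_manifest("10.32.0.4"), indent=2))
```

The "still exists / no longer exists" polling in the log is the test waiting out exactly this graceful-deletion window before asserting that the handler received the hook request.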
Jun 27 17:49:42.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:49:42.151: INFO: namespace: e2e-tests-container-lifecycle-hook-nzhhp, resource: bindings, ignored listing per whitelist
Jun 27 17:49:42.201: INFO: namespace e2e-tests-container-lifecycle-hook-nzhhp deletion completed in 22.111316092s

• [SLOW TEST:34.451 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:49:42.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f144a5ac-9903-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 17:49:42.399: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f14e3292-9903-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-ft9x7" to be "success or failure"
Jun 27 17:49:42.421: INFO: Pod "pod-projected-configmaps-f14e3292-9903-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.800058ms
Jun 27 17:49:44.460: INFO: Pod "pod-projected-configmaps-f14e3292-9903-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061309956s
Jun 27 17:49:46.465: INFO: Pod "pod-projected-configmaps-f14e3292-9903-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066345755s
STEP: Saw pod success
Jun 27 17:49:46.465: INFO: Pod "pod-projected-configmaps-f14e3292-9903-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 17:49:46.468: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-f14e3292-9903-11e9-8fa9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jun 27 17:49:46.499: INFO: Waiting for pod pod-projected-configmaps-f14e3292-9903-11e9-8fa9-0242ac110005 to disappear
Jun 27 17:49:46.504: INFO: Pod pod-projected-configmaps-f14e3292-9903-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:49:46.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ft9x7" for this suite.
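The projected-configMap consumption pattern exercised above can be sketched as follows. This is an illustrative reconstruction, not the test's real spec: the mount path, image, command, and UID 1000 are assumptions; the log shows only that the configMap is surfaced through a `projected` volume and that the container runs as a non-root user, exits, and is checked for a `Succeeded` phase ("success or failure").

```python
def projected_configmap_pod(configmap_name, run_as_user=1000):
    """Sketch of a pod that consumes a configMap via a projected volume
    while running as a non-root UID. All concrete values (UID, image,
    paths) are illustrative; only the shape mirrors the test."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-configmaps"},
        "spec": {
            "securityContext": {"runAsUser": run_as_user},  # non-root
            "restartPolicy": "Never",  # pod runs to completion -> "Succeeded"
            "containers": [{
                "name": "projected-configmap-volume-test",
                "image": "busybox",
                "command": ["cat", "/etc/projected-configmap-volume/data-1"],
                "volumeMounts": [{
                    "name": "projected-configmap-volume",
                    "mountPath": "/etc/projected-configmap-volume",
                    "readOnly": True,
                }],
            }],
            "volumes": [{
                "name": "projected-configmap-volume",
                # A projected volume can merge several sources (configMaps,
                # secrets, downward API); here a single configMap source.
                "projected": {
                    "sources": [{"configMap": {"name": configmap_name}}],
                },
            }],
        },
    }
```

The "success or failure" polling in the log is the framework waiting for exactly this run-to-completion pod to reach `Succeeded`, then fetching the container's logs to verify the file contents.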
Jun 27 17:49:52.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:49:52.633: INFO: namespace: e2e-tests-projected-ft9x7, resource: bindings, ignored listing per whitelist
Jun 27 17:49:52.657: INFO: namespace e2e-tests-projected-ft9x7 deletion completed in 6.149040207s

• [SLOW TEST:10.456 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:49:52.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f77c00db-9903-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 17:49:52.775: INFO: Waiting up to 5m0s for pod "pod-secrets-f77d0d9f-9903-11e9-8fa9-0242ac110005" in namespace "e2e-tests-secrets-kxqtw" to be "success or failure"
Jun 27 17:49:52.855: INFO: Pod "pod-secrets-f77d0d9f-9903-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 79.533344ms
Jun 27 17:49:54.861: INFO: Pod "pod-secrets-f77d0d9f-9903-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085633189s
Jun 27 17:49:56.867: INFO: Pod "pod-secrets-f77d0d9f-9903-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091265813s
STEP: Saw pod success
Jun 27 17:49:56.867: INFO: Pod "pod-secrets-f77d0d9f-9903-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 17:49:56.874: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-f77d0d9f-9903-11e9-8fa9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jun 27 17:49:56.920: INFO: Waiting for pod pod-secrets-f77d0d9f-9903-11e9-8fa9-0242ac110005 to disappear
Jun 27 17:49:57.356: INFO: Pod pod-secrets-f77d0d9f-9903-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:49:57.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kxqtw" for this suite.
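"Consumable in multiple volumes" means one secret mounted at two different paths through two distinct volume entries. A hypothetical sketch of that pod shape (mount paths, image, and command are assumptions, not from the log):

```python
def secret_multi_volume_pod(secret_name):
    """Sketch of a pod that mounts the same secret twice, via two
    separate volumes at two mount paths, so the container can read
    it from both locations. Paths and image are illustrative."""
    mounts = [
        {"name": "secret-volume-1", "mountPath": "/etc/secret-volume-1"},
        {"name": "secret-volume-2", "mountPath": "/etc/secret-volume-2"},
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-secrets"},
        "spec": {
            "restartPolicy": "Never",  # run to completion -> "Succeeded"
            "containers": [{
                "name": "secret-volume-test",
                "image": "busybox",
                "command": ["cat",
                            "/etc/secret-volume-1/data-1",
                            "/etc/secret-volume-2/data-1"],
                "volumeMounts": mounts,
            }],
            # Two volume entries, both backed by the same secret.
            "volumes": [
                {"name": m["name"], "secret": {"secretName": secret_name}}
                for m in mounts
            ],
        },
    }
```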
Jun 27 17:50:03.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:50:03.426: INFO: namespace: e2e-tests-secrets-kxqtw, resource: bindings, ignored listing per whitelist
Jun 27 17:50:03.533: INFO: namespace e2e-tests-secrets-kxqtw deletion completed in 6.169440862s

• [SLOW TEST:10.876 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:50:03.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 17:50:03.878: INFO: Creating deployment "nginx-deployment"
Jun 27 17:50:03.895: INFO: Waiting for observed generation 1
Jun 27 17:50:05.918: INFO: Waiting for all required pods to come up
Jun 27 17:50:05.924: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jun 27 17:50:19.958: INFO: Waiting for deployment "nginx-deployment" to complete
Jun 27 17:50:19.972: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jun 27 17:50:19.998: INFO: Updating deployment nginx-deployment
Jun 27 17:50:19.998: INFO: Waiting for observed generation 2
Jun 27 17:50:22.058: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jun 27 17:50:22.062: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jun 27 17:50:22.103: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jun 27 17:50:22.117: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jun 27 17:50:22.117: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jun 27 17:50:22.123: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jun 27 17:50:22.397: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jun 27 17:50:22.397: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jun 27 17:50:22.416: INFO: Updating deployment nginx-deployment
Jun 27 17:50:22.416: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jun 27 17:50:22.701: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jun 27 17:50:22.888: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jun 27 17:50:25.421: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-qft6b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qft6b/deployments/nginx-deployment,UID:fe1f3cb1-9903-11e9-a678-fa163e0cec1d,ResourceVersion:1366614,Generation:3,CreationTimestamp:2019-06-27 17:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-06-27 17:50:22 +0000 UTC 2019-06-27 17:50:22 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-06-27 17:50:23 +0000 UTC 2019-06-27 17:50:03 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-65bbdb5f8" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 27 17:50:25.715: INFO: New ReplicaSet "nginx-deployment-65bbdb5f8" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8,GenerateName:,Namespace:e2e-tests-deployment-qft6b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qft6b/replicasets/nginx-deployment-65bbdb5f8,UID:07bb02c5-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366607,Generation:3,CreationTimestamp:2019-06-27 17:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
65bbdb5f8,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment fe1f3cb1-9903-11e9-a678-fa163e0cec1d 0xc001361737 0xc001361738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 27 17:50:25.715: 
INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 27 17:50:25.716: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965,GenerateName:,Namespace:e2e-tests-deployment-qft6b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qft6b/replicasets/nginx-deployment-555b55d965,UID:fe22b5d7-9903-11e9-a678-fa163e0cec1d,ResourceVersion:1366601,Generation:3,CreationTimestamp:2019-06-27 17:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment fe1f3cb1-9903-11e9-a678-fa163e0cec1d 0xc001361677 0xc001361678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 27 17:50:25.921: INFO: Pod "nginx-deployment-555b55d965-7zlpm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-7zlpm,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-7zlpm,UID:fe4b00ed-9903-11e9-a678-fa163e0cec1d,ResourceVersion:1366472,Generation:0,CreationTimestamp:2019-06-27 17:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083ec77 0xc00083ec78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083ece0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00083ed00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.10,StartTime:2019-06-27 17:50:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-27 17:50:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://760c7399f91ee859f2c42f5be3acb401bb6f9b93fd4e5bd05037fc1d7bacd10d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.921: INFO: Pod "nginx-deployment-555b55d965-97k2b" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-97k2b,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-97k2b,UID:0959d9a0-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366572,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083ee47 0xc00083ee48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083eeb0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00083eed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.921: INFO: Pod "nginx-deployment-555b55d965-dbvdl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-dbvdl,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-dbvdl,UID:fe40046b-9903-11e9-a678-fa163e0cec1d,ResourceVersion:1366477,Generation:0,CreationTimestamp:2019-06-27 17:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083efb7 0xc00083efb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083f020} {node.kubernetes.io/unreachable Exists NoExecute 0xc00083f040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.12,StartTime:2019-06-27 17:50:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-27 17:50:14 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e5b26fb687bca424fb5a7945ad42406c82f2601a5ce419b9cb06eb7c297804fd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.922: INFO: Pod "nginx-deployment-555b55d965-dfdk9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-dfdk9,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-dfdk9,UID:0977ff3d-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366581,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083f187 0xc00083f188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083f1f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00083f210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.922: INFO: Pod "nginx-deployment-555b55d965-dvxkr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-dvxkr,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-dvxkr,UID:0977ef31-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366594,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083f287 0xc00083f288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083f3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00083f3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.922: INFO: Pod "nginx-deployment-555b55d965-f26kw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-f26kw,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-f26kw,UID:0977d65c-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366595,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083f437 0xc00083f438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083f4a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00083f600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.922: INFO: Pod "nginx-deployment-555b55d965-fks47" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-fks47,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-fks47,UID:0977eaaa-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366593,Generation:0,CreationTimestamp:2019-06-27 
17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083f677 0xc00083f678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083f6e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00083f700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.922: INFO: Pod 
"nginx-deployment-555b55d965-g2zxv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-g2zxv,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-g2zxv,UID:094f741c-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366618,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083f857 0xc00083f858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083f8c0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00083f8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-06-27 17:50:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.922: INFO: Pod "nginx-deployment-555b55d965-jl2kf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-jl2kf,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-jl2kf,UID:fe3fb357-9903-11e9-a678-fa163e0cec1d,ResourceVersion:1366453,Generation:0,CreationTimestamp:2019-06-27 17:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083f997 0xc00083f998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083fa00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00083fa20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.9,StartTime:2019-06-27 17:50:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-27 17:50:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cd3893eaafd9d804dd32979aaa7853d50f8ca5e8e1826fbd6c12e08fa921df86}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.923: INFO: Pod 
"nginx-deployment-555b55d965-k2qnw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-k2qnw,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-k2qnw,UID:094f5686-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366557,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083fae7 0xc00083fae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083fb50} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00083fb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.923: INFO: Pod "nginx-deployment-555b55d965-lkn94" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-lkn94,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-lkn94,UID:095974f7-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366573,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083fbe7 0xc00083fbe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083fc50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00083fc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.923: INFO: Pod "nginx-deployment-555b55d965-nfr5n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-nfr5n,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-nfr5n,UID:fe3acbb5-9903-11e9-a678-fa163e0cec1d,ResourceVersion:1366450,Generation:0,CreationTimestamp:2019-06-27 17:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083fce7 0xc00083fce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083fd50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00083fd70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.7,StartTime:2019-06-27 17:50:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-27 17:50:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://e890f1b508d9dadcdf87c23dda0002eba36d68a902e342658a7a8ca0b5f6dce4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.923: INFO: Pod "nginx-deployment-555b55d965-pkzpj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-pkzpj,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-pkzpj,UID:fe3cb278-9903-11e9-a678-fa163e0cec1d,ResourceVersion:1366446,Generation:0,CreationTimestamp:2019-06-27 17:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083fe37 0xc00083fe38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083fea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00083fec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.5,StartTime:2019-06-27 17:50:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-27 17:50:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://dcb0994d294a3c9db1d02772071fd50459529781aa049057f9adfefcd5b32b61}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.923: INFO: Pod "nginx-deployment-555b55d965-q9tbj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-q9tbj,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-q9tbj,UID:094bb549-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366589,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc00083ff87 0xc00083ff88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00083fff0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0015be010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-06-27 17:50:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.924: INFO: Pod "nginx-deployment-555b55d965-rqtm9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-rqtm9,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-rqtm9,UID:0959a8ce-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366569,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc0015be0c7 0xc0015be0c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015be130} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015be160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.924: INFO: Pod "nginx-deployment-555b55d965-sh9tg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-sh9tg,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-sh9tg,UID:fe3cb72b-9903-11e9-a678-fa163e0cec1d,ResourceVersion:1366466,Generation:0,CreationTimestamp:2019-06-27 17:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc0015be1d7 0xc0015be1d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015be240} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015be260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.4,StartTime:2019-06-27 
17:50:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-27 17:50:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b067f504e179ab4182c0350c4881174a537bb0f1c9eaeab0b3e279f48b4c155a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.924: INFO: Pod "nginx-deployment-555b55d965-tfzr9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-tfzr9,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-tfzr9,UID:fe4b08a6-9903-11e9-a678-fa163e0cec1d,ResourceVersion:1366442,Generation:0,CreationTimestamp:2019-06-27 17:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc0015be327 0xc0015be328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015be590} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015be5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.8,StartTime:2019-06-27 17:50:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-27 17:50:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2c142ea9b531b2a619843fceb1c080a86f1d4a676922bdc0ec82d6c23322a667}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.924: INFO: Pod "nginx-deployment-555b55d965-vkhqg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-vkhqg,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-vkhqg,UID:fe3f5cee-9903-11e9-a678-fa163e0cec1d,ResourceVersion:1366461,Generation:0,CreationTimestamp:2019-06-27 17:50:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc0015be677 0xc0015be678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015be6e0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0015be700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:04 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.6,StartTime:2019-06-27 17:50:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-27 17:50:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b5a1b6b33978358155ba617232794976b79302715f67ba4d8e1465171dd518ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.925: INFO: Pod "nginx-deployment-555b55d965-vx6n7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-vx6n7,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-vx6n7,UID:0959c538-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366570,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc0015bea87 0xc0015bea88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015beaf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015beb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.925: INFO: Pod "nginx-deployment-555b55d965-wp4bv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-wp4bv,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-555b55d965-wp4bv,UID:0977ed96-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366592,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 fe22b5d7-9903-11e9-a678-fa163e0cec1d 0xc0015bed27 0xc0015bed28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bed90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015bedc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.925: INFO: Pod "nginx-deployment-65bbdb5f8-26wjv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-26wjv,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-26wjv,UID:07fdf964-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366546,Generation:0,CreationTimestamp:2019-06-27 17:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015bee37 0xc0015bee38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bef00} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0015bef20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-06-27 17:50:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.925: INFO: Pod "nginx-deployment-65bbdb5f8-8whjn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-8whjn,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-8whjn,UID:0997eaa7-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366603,Generation:0,CreationTimestamp:2019-06-27 17:50:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015beff7 0xc0015beff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx 
nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bf090} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015bf0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.925: INFO: Pod "nginx-deployment-65bbdb5f8-fqszh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-fqszh,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-fqszh,UID:07bfd2f3-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366537,Generation:0,CreationTimestamp:2019-06-27 17:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 
07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015bf137 0xc0015bf138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bf1a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015bf1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-06-27 17:50:20 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.926: INFO: Pod "nginx-deployment-65bbdb5f8-gn92c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-gn92c,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-gn92c,UID:09773dd6-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366580,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015bf317 0xc0015bf318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bf380} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015bf410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.926: INFO: Pod "nginx-deployment-65bbdb5f8-h695q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-h695q,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-h695q,UID:07bfafc5-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366526,Generation:0,CreationTimestamp:2019-06-27 17:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015bf487 0xc0015bf488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bf4f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015bf510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-06-27 17:50:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.926: INFO: Pod "nginx-deployment-65bbdb5f8-jdwpm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-jdwpm,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-jdwpm,UID:09825006-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366602,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015bf647 0xc0015bf648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bf6c0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0015bf6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.926: INFO: Pod "nginx-deployment-65bbdb5f8-jpp7m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-jpp7m,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-jpp7m,UID:09761b20-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366579,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015bf757 0xc0015bf758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bf850} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015bf870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.926: INFO: Pod "nginx-deployment-65bbdb5f8-mbrhq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-mbrhq,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-mbrhq,UID:09828fe4-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366598,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015bf8f7 0xc0015bf8f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bf960} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015bfa30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.927: INFO: Pod "nginx-deployment-65bbdb5f8-mjlk9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-mjlk9,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-mjlk9,UID:07f6d9b2-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366542,Generation:0,CreationTimestamp:2019-06-27 17:50:20 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015bfaa7 0xc0015bfaa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bfb20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015bfb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-06-27 17:50:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.927: INFO: Pod "nginx-deployment-65bbdb5f8-qkbf4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-qkbf4,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-qkbf4,UID:09828460-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366597,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015bfd47 0xc0015bfd48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bfdc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015bfde0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.927: INFO: Pod "nginx-deployment-65bbdb5f8-vb6q9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-vb6q9,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-vb6q9,UID:07bc3af8-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366505,Generation:0,CreationTimestamp:2019-06-27 17:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0015bfe57 0xc0015bfe58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015bff30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015bff50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:20 +0000 UTC }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-06-27 17:50:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.927: INFO: Pod "nginx-deployment-65bbdb5f8-w29jw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-w29jw,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-w29jw,UID:0957be4a-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366566,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0018f8017 0xc0018f8018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018f8080} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0018f80a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 27 17:50:25.927: INFO: Pod "nginx-deployment-65bbdb5f8-wp4l8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-wp4l8,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-qft6b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qft6b/pods/nginx-deployment-65bbdb5f8-wp4l8,UID:09826676-9904-11e9-a678-fa163e0cec1d,ResourceVersion:1366596,Generation:0,CreationTimestamp:2019-06-27 17:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 07bb02c5-9904-11e9-a678-fa163e0cec1d 0xc0018f8117 0xc0018f8118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2k4pd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2k4pd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2k4pd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018f8180} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018f81a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 17:50:23 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 27 17:50:25.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-qft6b" for this suite. 
Jun 27 17:50:46.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 27 17:50:46.549: INFO: namespace: e2e-tests-deployment-qft6b, resource: bindings, ignored listing per whitelist Jun 27 17:50:46.615: INFO: namespace e2e-tests-deployment-qft6b deletion completed in 20.422386102s • [SLOW TEST:43.081 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 27 17:50:46.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 27 17:50:47.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-lsqzv" to be "success or failure" Jun 27 17:50:47.621: INFO: Pod "downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 384.407656ms Jun 27 17:50:49.629: INFO: Pod "downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393054149s Jun 27 17:50:51.640: INFO: Pod "downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403300361s Jun 27 17:50:53.645: INFO: Pod "downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.409015493s Jun 27 17:50:55.651: INFO: Pod "downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.41417154s Jun 27 17:50:57.656: INFO: Pod "downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.419176356s Jun 27 17:50:59.661: INFO: Pod "downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.424675443s STEP: Saw pod success Jun 27 17:50:59.661: INFO: Pod "downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005" satisfied condition "success or failure" Jun 27 17:50:59.666: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005 container client-container: STEP: delete the pod Jun 27 17:50:59.730: INFO: Waiting for pod downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005 to disappear Jun 27 17:50:59.738: INFO: Pod downwardapi-volume-17ea8460-9904-11e9-8fa9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 27 17:50:59.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lsqzv" for this suite. 
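The spec above asserts that an explicit per-item `mode` on a projected downward API volume is honored on the written file. A minimal manifest exercising the same behavior might look like the following sketch; the pod and path names are illustrative, not the ones the e2e framework generates:

```yaml
# Hypothetical pod, not the e2e-generated one: mounts a projected
# downward API volume and sets a per-item file mode.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the octal mode of the projected file, as the test inspects it.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400             # the per-item mode under test
```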
Jun 27 17:51:05.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 27 17:51:05.849: INFO: namespace: e2e-tests-projected-lsqzv, resource: bindings, ignored listing per whitelist Jun 27 17:51:05.907: INFO: namespace e2e-tests-projected-lsqzv deletion completed in 6.163974441s • [SLOW TEST:19.291 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 27 17:51:05.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 27 17:51:12.620: INFO: Successfully updated pod "labelsupdate23306adf-9904-11e9-8fa9-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 27 17:51:14.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-downward-api-c6rbr" for this suite. Jun 27 17:51:36.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 27 17:51:36.830: INFO: namespace: e2e-tests-downward-api-c6rbr, resource: bindings, ignored listing per whitelist Jun 27 17:51:36.891: INFO: namespace e2e-tests-downward-api-c6rbr deletion completed in 22.15120864s • [SLOW TEST:30.984 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 27 17:51:36.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-359b3917-9904-11e9-8fa9-0242ac110005 STEP: Creating a pod to test consume configMaps Jun 27 17:51:36.992: INFO: Waiting up to 5m0s for pod "pod-configmaps-359bcb07-9904-11e9-8fa9-0242ac110005" in namespace "e2e-tests-configmap-qt94h" to be "success or failure" Jun 27 17:51:36.998: INFO: Pod "pod-configmaps-359bcb07-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", 
readiness=false. Elapsed: 6.528186ms Jun 27 17:51:39.031: INFO: Pod "pod-configmaps-359bcb07-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038976196s Jun 27 17:51:41.037: INFO: Pod "pod-configmaps-359bcb07-9904-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045313434s STEP: Saw pod success Jun 27 17:51:41.037: INFO: Pod "pod-configmaps-359bcb07-9904-11e9-8fa9-0242ac110005" satisfied condition "success or failure" Jun 27 17:51:41.043: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-359bcb07-9904-11e9-8fa9-0242ac110005 container configmap-volume-test: STEP: delete the pod Jun 27 17:51:41.107: INFO: Waiting for pod pod-configmaps-359bcb07-9904-11e9-8fa9-0242ac110005 to disappear Jun 27 17:51:41.112: INFO: Pod pod-configmaps-359bcb07-9904-11e9-8fa9-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 27 17:51:41.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qt94h" for this suite. 
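The ConfigMap spec above verifies that `defaultMode` on a configMap volume source is applied to every projected key file. A hedged sketch of an equivalent manifest (ConfigMap and pod names are placeholders):

```yaml
# Illustrative only -- the e2e framework generates its own names and data.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/config/key"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: my-config         # placeholder ConfigMap with a key named "key"
      defaultMode: 0400       # applied to each projected key file
```

Individual `items[].mode` entries, when present, override `defaultMode` per file.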
Jun 27 17:51:47.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 27 17:51:47.219: INFO: namespace: e2e-tests-configmap-qt94h, resource: bindings, ignored listing per whitelist Jun 27 17:51:47.246: INFO: namespace e2e-tests-configmap-qt94h deletion completed in 6.129207405s • [SLOW TEST:10.355 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 27 17:51:47.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jun 27 17:51:47.400: INFO: Waiting up to 5m0s for pod "pod-3bd02163-9904-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-t96bm" to be "success or failure" Jun 27 17:51:47.414: INFO: Pod "pod-3bd02163-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.275408ms Jun 27 17:51:49.419: INFO: Pod "pod-3bd02163-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019098183s Jun 27 17:51:51.426: INFO: Pod "pod-3bd02163-9904-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02588898s STEP: Saw pod success Jun 27 17:51:51.426: INFO: Pod "pod-3bd02163-9904-11e9-8fa9-0242ac110005" satisfied condition "success or failure" Jun 27 17:51:51.432: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-3bd02163-9904-11e9-8fa9-0242ac110005 container test-container: STEP: delete the pod Jun 27 17:51:51.467: INFO: Waiting for pod pod-3bd02163-9904-11e9-8fa9-0242ac110005 to disappear Jun 27 17:51:51.473: INFO: Pod pod-3bd02163-9904-11e9-8fa9-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 27 17:51:51.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-t96bm" for this suite. 
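The emptyDir spec above checks the mount's mode when no medium is specified, i.e. node-local disk backing. A minimal volume stanza for the same setup (names are illustrative):

```yaml
# Sketch of an emptyDir on the default medium; omitting "medium" selects
# node storage, while "medium: Memory" would select a tmpfs instead.
  volumes:
  - name: test-volume
    emptyDir: {}
```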
Jun 27 17:51:57.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 27 17:51:57.576: INFO: namespace: e2e-tests-emptydir-t96bm, resource: bindings, ignored listing per whitelist Jun 27 17:51:57.660: INFO: namespace e2e-tests-emptydir-t96bm deletion completed in 6.181311566s • [SLOW TEST:10.414 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 27 17:51:57.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 27 17:52:01.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-c4rlz" for this suite. 
Jun 27 17:52:07.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 27 17:52:08.005: INFO: namespace: e2e-tests-emptydir-wrapper-c4rlz, resource: bindings, ignored listing per whitelist Jun 27 17:52:08.128: INFO: namespace e2e-tests-emptydir-wrapper-c4rlz deletion completed in 6.157992286s • [SLOW TEST:10.467 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 27 17:52:08.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jun 27 17:52:08.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-d9w49' Jun 27 17:52:10.293: INFO: stderr: "" Jun 27 17:52:10.293: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
STEP: Waiting for Redis master to start. Jun 27 17:52:11.301: INFO: Selector matched 1 pods for map[app:redis] Jun 27 17:52:11.301: INFO: Found 0 / 1 Jun 27 17:52:12.298: INFO: Selector matched 1 pods for map[app:redis] Jun 27 17:52:12.298: INFO: Found 0 / 1 Jun 27 17:52:13.297: INFO: Selector matched 1 pods for map[app:redis] Jun 27 17:52:13.297: INFO: Found 1 / 1 Jun 27 17:52:13.297: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 27 17:52:13.300: INFO: Selector matched 1 pods for map[app:redis] Jun 27 17:52:13.300: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jun 27 17:52:13.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6t6wj redis-master --namespace=e2e-tests-kubectl-d9w49' Jun 27 17:52:13.391: INFO: stderr: "" Jun 27 17:52:13.391: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Jun 17:52:12.688 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Jun 17:52:12.688 # Server started, Redis version 3.2.12\n1:M 27 Jun 17:52:12.688 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 27 Jun 17:52:12.688 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jun 27 17:52:13.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6t6wj redis-master --namespace=e2e-tests-kubectl-d9w49 --tail=1' Jun 27 17:52:13.473: INFO: stderr: "" Jun 27 17:52:13.473: INFO: stdout: "1:M 27 Jun 17:52:12.688 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jun 27 17:52:13.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6t6wj redis-master --namespace=e2e-tests-kubectl-d9w49 --limit-bytes=1' Jun 27 17:52:13.545: INFO: stderr: "" Jun 27 17:52:13.545: INFO: stdout: " " STEP: exposing timestamps Jun 27 17:52:13.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6t6wj redis-master --namespace=e2e-tests-kubectl-d9w49 --tail=1 --timestamps' Jun 27 17:52:13.623: INFO: stderr: "" Jun 27 17:52:13.623: INFO: stdout: "2019-06-27T17:52:12.688725793Z 1:M 27 Jun 17:52:12.688 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jun 27 17:52:16.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6t6wj redis-master --namespace=e2e-tests-kubectl-d9w49 --since=1s' Jun 27 17:52:16.246: INFO: stderr: "" Jun 27 17:52:16.246: INFO: stdout: "" Jun 27 17:52:16.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6t6wj redis-master --namespace=e2e-tests-kubectl-d9w49 --since=24h' Jun 27 17:52:16.358: INFO: stderr: "" Jun 27 17:52:16.358: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Jun 17:52:12.688 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Jun 17:52:12.688 # Server started, Redis version 3.2.12\n1:M 27 Jun 17:52:12.688 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Jun 17:52:12.688 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jun 27 17:52:16.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-d9w49' Jun 27 17:52:16.457: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 27 17:52:16.457: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jun 27 17:52:16.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-d9w49' Jun 27 17:52:16.545: INFO: stderr: "No resources found.\n" Jun 27 17:52:16.545: INFO: stdout: "" Jun 27 17:52:16.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-d9w49 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 27 17:52:16.618: INFO: stderr: "" Jun 27 17:52:16.618: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 27 17:52:16.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d9w49" for this suite. 
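The kubectl spec above exercises the log-filtering flags one by one. Note the suite invokes the deprecated singular `kubectl log`; the current form is `kubectl logs`. The same steps can be reproduced against any running pod, where `POD`, `CONTAINER`, and `NS` are placeholders for your own pod, container, and namespace:

```shell
# Filtering flags mirroring the test steps above (placeholders, not the
# e2e-generated redis-master pod).
kubectl logs POD -c CONTAINER -n NS --tail=1              # last line only
kubectl logs POD -c CONTAINER -n NS --limit-bytes=1       # first byte only
kubectl logs POD -c CONTAINER -n NS --tail=1 --timestamps # prefix RFC3339 timestamps
kubectl logs POD -c CONTAINER -n NS --since=1s            # entries from the last second
kubectl logs POD -c CONTAINER -n NS --since=24h           # entries from the last day
```

These flags combine freely; `--since` and `--tail` together bound the window and the line count at once.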
Jun 27 17:52:38.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:52:38.816: INFO: namespace: e2e-tests-kubectl-d9w49, resource: bindings, ignored listing per whitelist
Jun 27 17:52:38.848: INFO: namespace e2e-tests-kubectl-d9w49 deletion completed in 22.227300761s
• [SLOW TEST:30.720 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:52:38.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-5a972df0-9904-11e9-8fa9-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-5a972e4d-9904-11e9-8fa9-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5a972df0-9904-11e9-8fa9-0242ac110005
STEP: Updating configmap cm-test-opt-upd-5a972e4d-9904-11e9-8fa9-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-5a972e6e-9904-11e9-8fa9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:52:47.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lw4dp" for this suite.
Jun 27 17:53:09.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:53:09.231: INFO: namespace: e2e-tests-configmap-lw4dp, resource: bindings, ignored listing per whitelist
Jun 27 17:53:09.309: INFO: namespace e2e-tests-configmap-lw4dp deletion completed in 22.102995737s
• [SLOW TEST:30.460 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:53:09.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-mvc4l
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jun 27 17:53:09.405: INFO: Found 0 stateful pods, waiting for 3
Jun 27 17:53:19.413: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 17:53:19.413: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 17:53:19.413: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jun 27 17:53:29.413: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 17:53:29.413: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 17:53:29.413: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 17:53:29.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mvc4l ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 27 17:53:29.785: INFO: stderr: ""
Jun 27 17:53:29.785: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 27 17:53:29.785: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jun 27 17:53:39.825: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jun 27 17:53:49.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mvc4l ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 27 17:53:50.184: INFO: stderr: ""
Jun 27 17:53:50.184: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 27 17:53:50.184: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 27 17:54:00.200: INFO: Waiting for StatefulSet e2e-tests-statefulset-mvc4l/ss2 to complete update
Jun 27 17:54:00.200: INFO: Waiting for Pod e2e-tests-statefulset-mvc4l/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666
Jun 27 17:54:00.200: INFO: Waiting for Pod e2e-tests-statefulset-mvc4l/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666
Jun 27 17:54:00.200: INFO: Waiting for Pod e2e-tests-statefulset-mvc4l/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666
Jun 27 17:54:10.210: INFO: Waiting for StatefulSet e2e-tests-statefulset-mvc4l/ss2 to complete update
Jun 27 17:54:10.210: INFO: Waiting for Pod e2e-tests-statefulset-mvc4l/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666
Jun 27 17:54:10.210: INFO: Waiting for Pod e2e-tests-statefulset-mvc4l/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666
Jun 27 17:54:20.209: INFO: Waiting for StatefulSet e2e-tests-statefulset-mvc4l/ss2 to complete update
STEP: Rolling back to a previous revision
Jun 27 17:54:30.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mvc4l ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 27 17:54:30.544: INFO: stderr: ""
Jun 27 17:54:30.544: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 27 17:54:30.544: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 27 17:54:40.586: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jun 27 17:54:50.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mvc4l ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 27 17:54:50.945: INFO: stderr: ""
Jun 27 17:54:50.945: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 27 17:54:50.945: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 27 17:55:10.983: INFO: Waiting for StatefulSet e2e-tests-statefulset-mvc4l/ss2 to complete update
Jun 27 17:55:10.983: INFO: Waiting for Pod e2e-tests-statefulset-mvc4l/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jun 27 17:55:20.992: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mvc4l
Jun 27 17:55:20.995: INFO: Scaling statefulset ss2 to 0
Jun 27 17:55:51.027: INFO: Waiting for statefulset status.replicas updated to 0
Jun 27 17:55:51.030: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:55:51.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-mvc4l" for this suite.
Jun 27 17:55:59.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:55:59.225: INFO: namespace: e2e-tests-statefulset-mvc4l, resource: bindings, ignored listing per whitelist
Jun 27 17:55:59.243: INFO: namespace e2e-tests-statefulset-mvc4l deletion completed in 8.147537677s
• [SLOW TEST:169.934 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:55:59.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 17:55:59.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2007b89-9904-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-s5fl9" to be "success or failure"
Jun 27 17:55:59.545: INFO: Pod "downwardapi-volume-d2007b89-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.049542ms
Jun 27 17:56:01.550: INFO: Pod "downwardapi-volume-d2007b89-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050067676s
Jun 27 17:56:03.556: INFO: Pod "downwardapi-volume-d2007b89-9904-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055567318s
Jun 27 17:56:05.560: INFO: Pod "downwardapi-volume-d2007b89-9904-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059414894s
STEP: Saw pod success
Jun 27 17:56:05.560: INFO: Pod "downwardapi-volume-d2007b89-9904-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 17:56:05.563: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-d2007b89-9904-11e9-8fa9-0242ac110005 container client-container:
STEP: delete the pod
Jun 27 17:56:05.779: INFO: Waiting for pod downwardapi-volume-d2007b89-9904-11e9-8fa9-0242ac110005 to disappear
Jun 27 17:56:05.796: INFO: Pod downwardapi-volume-d2007b89-9904-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:56:05.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-s5fl9" for this suite.
Jun 27 17:56:11.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:56:11.935: INFO: namespace: e2e-tests-downward-api-s5fl9, resource: bindings, ignored listing per whitelist
Jun 27 17:56:11.984: INFO: namespace e2e-tests-downward-api-s5fl9 deletion completed in 6.17504415s
• [SLOW TEST:12.740 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:56:11.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jun 27 17:56:23.612: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:56:24.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-dtqwl" for this suite.
Jun 27 17:56:51.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:56:51.181: INFO: namespace: e2e-tests-replicaset-dtqwl, resource: bindings, ignored listing per whitelist
Jun 27 17:56:51.181: INFO: namespace e2e-tests-replicaset-dtqwl deletion completed in 26.356125462s
• [SLOW TEST:39.196 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:56:51.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 17:56:51.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:56:57.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mwttd" for this suite.
Jun 27 17:57:49.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:57:49.830: INFO: namespace: e2e-tests-pods-mwttd, resource: bindings, ignored listing per whitelist
Jun 27 17:57:49.907: INFO: namespace e2e-tests-pods-mwttd deletion completed in 52.216197021s
• [SLOW TEST:58.727 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:57:49.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0627 17:58:00.940528 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 27 17:58:00.940: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:58:00.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7j78c" for this suite.
Jun 27 17:58:09.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:58:09.081: INFO: namespace: e2e-tests-gc-7j78c, resource: bindings, ignored listing per whitelist
Jun 27 17:58:09.099: INFO: namespace e2e-tests-gc-7j78c deletion completed in 8.154643069s
• [SLOW TEST:19.192 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:58:09.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 27 17:58:09.294: INFO: Number of nodes with available pods: 0
Jun 27 17:58:09.294: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 17:58:10.300: INFO: Number of nodes with available pods: 0
Jun 27 17:58:10.300: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 17:58:11.334: INFO: Number of nodes with available pods: 0
Jun 27 17:58:11.334: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 17:58:12.305: INFO: Number of nodes with available pods: 1
Jun 27 17:58:12.305: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jun 27 17:58:12.361: INFO: Number of nodes with available pods: 0
Jun 27 17:58:12.361: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 17:58:13.444: INFO: Number of nodes with available pods: 0
Jun 27 17:58:13.444: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 17:58:14.367: INFO: Number of nodes with available pods: 0
Jun 27 17:58:14.367: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 17:58:15.416: INFO: Number of nodes with available pods: 0
Jun 27 17:58:15.416: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 17:58:16.372: INFO: Number of nodes with available pods: 0
Jun 27 17:58:16.372: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 17:58:17.371: INFO: Number of nodes with available pods: 1
Jun 27 17:58:17.371: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-j6sm4, will wait for the garbage collector to delete the pods
Jun 27 17:58:17.508: INFO: Deleting DaemonSet.extensions daemon-set took: 11.951567ms
Jun 27 17:58:17.708: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.228727ms
Jun 27 17:58:25.811: INFO: Number of nodes with available pods: 0
Jun 27 17:58:25.811: INFO: Number of running nodes: 0, number of available pods: 0
Jun 27 17:58:25.815: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-j6sm4/daemonsets","resourceVersion":"1368280"},"items":null}
Jun 27 17:58:25.819: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-j6sm4/pods","resourceVersion":"1368280"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:58:25.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-j6sm4" for this suite.
Jun 27 17:58:31.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:58:31.961: INFO: namespace: e2e-tests-daemonsets-j6sm4, resource: bindings, ignored listing per whitelist
Jun 27 17:58:32.132: INFO: namespace e2e-tests-daemonsets-j6sm4 deletion completed in 6.301341391s
• [SLOW TEST:23.032 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:58:32.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jun 27 17:58:32.271: INFO: Waiting up to 5m0s for pod "client-containers-2d23090e-9905-11e9-8fa9-0242ac110005" in namespace "e2e-tests-containers-lb4mc" to be "success or failure"
Jun 27 17:58:32.386: INFO: Pod "client-containers-2d23090e-9905-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 114.84775ms
Jun 27 17:58:34.391: INFO: Pod "client-containers-2d23090e-9905-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119481377s
Jun 27 17:58:36.395: INFO: Pod "client-containers-2d23090e-9905-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123605712s
Jun 27 17:58:38.402: INFO: Pod "client-containers-2d23090e-9905-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130491344s
STEP: Saw pod success
Jun 27 17:58:38.402: INFO: Pod "client-containers-2d23090e-9905-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 17:58:38.406: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod client-containers-2d23090e-9905-11e9-8fa9-0242ac110005 container test-container:
STEP: delete the pod
Jun 27 17:58:38.519: INFO: Waiting for pod client-containers-2d23090e-9905-11e9-8fa9-0242ac110005 to disappear
Jun 27 17:58:38.528: INFO: Pod client-containers-2d23090e-9905-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 17:58:38.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-lb4mc" for this suite.
Jun 27 17:58:44.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 17:58:44.600: INFO: namespace: e2e-tests-containers-lb4mc, resource: bindings, ignored listing per whitelist
Jun 27 17:58:44.618: INFO: namespace e2e-tests-containers-lb4mc deletion completed in 6.08474805s

• [SLOW TEST:12.486 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 17:58:44.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-348e4f36-9905-11e9-8fa9-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-348e4f36-9905-11e9-8fa9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:00:18.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rmt7l" for this suite.
Jun 27 18:00:40.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:00:40.250: INFO: namespace: e2e-tests-configmap-rmt7l, resource: bindings, ignored listing per whitelist
Jun 27 18:00:40.277: INFO: namespace e2e-tests-configmap-rmt7l deletion completed in 22.12431063s

• [SLOW TEST:115.658 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:00:40.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 27 18:00:40.531: INFO: Waiting up to 5m0s for pod "pod-79950522-9905-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-f8j6t" to be "success or failure"
Jun 27 18:00:40.547: INFO: Pod "pod-79950522-9905-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.390542ms
Jun 27 18:00:42.551: INFO: Pod "pod-79950522-9905-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019737871s
Jun 27 18:00:44.554: INFO: Pod "pod-79950522-9905-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022758469s
STEP: Saw pod success
Jun 27 18:00:44.554: INFO: Pod "pod-79950522-9905-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:00:44.555: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-79950522-9905-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 18:00:44.827: INFO: Waiting for pod pod-79950522-9905-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:00:44.853: INFO: Pod pod-79950522-9905-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:00:44.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-f8j6t" for this suite.
Jun 27 18:00:50.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:00:50.992: INFO: namespace: e2e-tests-emptydir-f8j6t, resource: bindings, ignored listing per whitelist
Jun 27 18:00:51.042: INFO: namespace e2e-tests-emptydir-f8j6t deletion completed in 6.186938626s

• [SLOW TEST:10.765 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:00:51.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-m6gfj
Jun 27 18:00:55.216: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-m6gfj
STEP: checking the pod's current state and verifying that restartCount is present
Jun 27 18:00:55.220: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:04:57.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-m6gfj" for this suite.
Jun 27 18:05:03.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:05:03.096: INFO: namespace: e2e-tests-container-probe-m6gfj, resource: bindings, ignored listing per whitelist
Jun 27 18:05:03.173: INFO: namespace e2e-tests-container-probe-m6gfj deletion completed in 6.122804792s

• [SLOW TEST:252.131 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:05:03.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-5q2dg in namespace e2e-tests-proxy-6sv2m
I0627 18:05:03.466579 8 runners.go:184] Created replication controller with name: proxy-service-5q2dg, namespace: e2e-tests-proxy-6sv2m, replica count: 1
I0627 18:05:04.517036 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0627 18:05:05.517264 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0627 18:05:06.517530 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0627 18:05:07.517790 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0627 18:05:08.518017 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0627 18:05:09.518247 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0627 18:05:10.518410 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0627 18:05:11.518661 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0627 18:05:12.518901 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0627 18:05:13.519078 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0627 18:05:14.519222 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0627 18:05:15.519477 8 runners.go:184] proxy-service-5q2dg Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 27 18:05:15.525: INFO: setup took 12.25842116s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jun 27 18:05:15.549: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6sv2m/pods/proxy-service-5q2dg-t4ntv:160/proxy/: foo (200; 23.808382ms)
Jun 27 18:05:15.560: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6sv2m/services/proxy-service-5q2dg:portname1/proxy/: foo (200; 35.038599ms)
Jun 27 18:05:15.561: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6sv2m/services/http:proxy-service-5q2dg:portname1/proxy/: foo (200; 36.24551ms)
Jun 27 18:05:15.561: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6sv2m/pods/proxy-service-5q2dg-t4ntv:1080/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jun 27 18:05:36.118: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-275d6c1c-9906-11e9-8fa9-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-44djd", SelfLink:"/api/v1/namespaces/e2e-tests-pods-44djd/pods/pod-submit-remove-275d6c1c-9906-11e9-8fa9-0242ac110005", UID:"27608a8c-9906-11e9-a678-fa163e0cec1d", ResourceVersion:"1368967", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63697255532, loc:(*time.Location)(0x7947a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"79831527"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil),
ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2gz2k", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001f26b80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2gz2k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), 
ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0012266e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-x6tdbol33slm", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b9ae40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001226720)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001226740)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001226748), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00122674c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255532, loc:(*time.Location)(0x7947a80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255534, loc:(*time.Location)(0x7947a80)}}, Reason:"", 
Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255534, loc:(*time.Location)(0x7947a80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255532, loc:(*time.Location)(0x7947a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.100.12", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001bd5240), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001bd5260), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://540e36ef5699a1b37c0f4588baada828f28ef30db54b3dd2d41efb6764cf8287"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jun 27 18:05:41.138: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:05:41.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-44djd" for this suite.
Jun 27 18:05:47.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:05:47.201: INFO: namespace: e2e-tests-pods-44djd, resource: bindings, ignored listing per whitelist
Jun 27 18:05:47.236: INFO: namespace e2e-tests-pods-44djd deletion completed in 6.094176022s

• [SLOW TEST:15.271 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:05:47.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jun 27 18:05:51.954: INFO: Successfully updated pod "annotationupdate307b897b-9906-11e9-8fa9-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:05:53.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-j9fr9" for this suite.
Jun 27 18:06:16.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:06:16.088: INFO: namespace: e2e-tests-downward-api-j9fr9, resource: bindings, ignored listing per whitelist
Jun 27 18:06:16.149: INFO: namespace e2e-tests-downward-api-j9fr9 deletion completed in 22.166016771s

• [SLOW TEST:28.912 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:06:16.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 27 18:06:16.232: INFO: Waiting up to 5m0s for pod "pod-41ad5baa-9906-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-4fqqf" to be "success or failure"
Jun 27 18:06:16.268: INFO: Pod "pod-41ad5baa-9906-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.431004ms
Jun 27 18:06:18.332: INFO: Pod "pod-41ad5baa-9906-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099213109s
Jun 27 18:06:20.336: INFO: Pod "pod-41ad5baa-9906-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103649971s
STEP: Saw pod success
Jun 27 18:06:20.336: INFO: Pod "pod-41ad5baa-9906-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:06:20.343: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-41ad5baa-9906-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 18:06:20.448: INFO: Waiting for pod pod-41ad5baa-9906-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:06:20.461: INFO: Pod pod-41ad5baa-9906-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:06:20.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4fqqf" for this suite.
Jun 27 18:06:26.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:06:26.532: INFO: namespace: e2e-tests-emptydir-4fqqf, resource: bindings, ignored listing per whitelist
Jun 27 18:06:26.624: INFO: namespace e2e-tests-emptydir-4fqqf deletion completed in 6.158881325s

• [SLOW TEST:10.476 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:06:26.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:06:26.741: INFO: (0) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/:
alternatives.log
apt/
... (200; 7.302392ms)
Jun 27 18:06:26.745: INFO: (1) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.878937ms)
Jun 27 18:06:26.751: INFO: (2) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.152759ms)
Jun 27 18:06:26.755: INFO: (3) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.340621ms)
Jun 27 18:06:26.801: INFO: (4) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 46.250273ms)
Jun 27 18:06:26.812: INFO: (5) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 11.10604ms)
Jun 27 18:06:26.817: INFO: (6) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.813639ms)
Jun 27 18:06:26.824: INFO: (7) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.646342ms)
Jun 27 18:06:26.830: INFO: (8) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.48499ms)
Jun 27 18:06:26.837: INFO: (9) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.75726ms)
Jun 27 18:06:26.845: INFO: (10) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.488004ms)
Jun 27 18:06:26.851: INFO: (11) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.775058ms)
Jun 27 18:06:26.855: INFO: (12) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.844488ms)
Jun 27 18:06:26.860: INFO: (13) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.161766ms)
Jun 27 18:06:26.865: INFO: (14) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.467555ms)
Jun 27 18:06:26.872: INFO: (15) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.543225ms)
Jun 27 18:06:26.877: INFO: (16) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.00343ms)
Jun 27 18:06:26.884: INFO: (17) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.000948ms)
Jun 27 18:06:26.889: INFO: (18) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.088601ms)
Jun 27 18:06:26.893: INFO: (19) /api/v1/nodes/hunter-server-x6tdbol33slm:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.164703ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:06:26.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-lf2nc" for this suite.
Jun 27 18:06:32.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:06:32.957: INFO: namespace: e2e-tests-proxy-lf2nc, resource: bindings, ignored listing per whitelist
Jun 27 18:06:33.012: INFO: namespace e2e-tests-proxy-lf2nc deletion completed in 6.115678333s

• [SLOW TEST:6.387 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:06:33.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-4bbde08d-9906-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 18:06:33.127: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4bbefc42-9906-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-4gd6c" to be "success or failure"
Jun 27 18:06:33.131: INFO: Pod "pod-projected-secrets-4bbefc42-9906-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568458ms
Jun 27 18:06:35.137: INFO: Pod "pod-projected-secrets-4bbefc42-9906-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010357217s
Jun 27 18:06:37.142: INFO: Pod "pod-projected-secrets-4bbefc42-9906-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015239522s
STEP: Saw pod success
Jun 27 18:06:37.142: INFO: Pod "pod-projected-secrets-4bbefc42-9906-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:06:37.145: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-4bbefc42-9906-11e9-8fa9-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jun 27 18:06:37.173: INFO: Waiting for pod pod-projected-secrets-4bbefc42-9906-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:06:37.177: INFO: Pod pod-projected-secrets-4bbefc42-9906-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:06:37.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4gd6c" for this suite.
Jun 27 18:06:43.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:06:43.232: INFO: namespace: e2e-tests-projected-4gd6c, resource: bindings, ignored listing per whitelist
Jun 27 18:06:43.270: INFO: namespace e2e-tests-projected-4gd6c deletion completed in 6.089296649s

• [SLOW TEST:10.258 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:06:43.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-8ptl
STEP: Creating a pod to test atomic-volume-subpath
Jun 27 18:06:43.480: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8ptl" in namespace "e2e-tests-subpath-56sss" to be "success or failure"
Jun 27 18:06:43.487: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Pending", Reason="", readiness=false. Elapsed: 7.164999ms
Jun 27 18:06:45.548: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068618995s
Jun 27 18:06:47.554: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074501262s
Jun 27 18:06:49.560: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Running", Reason="", readiness=false. Elapsed: 6.080300032s
Jun 27 18:06:51.589: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Running", Reason="", readiness=false. Elapsed: 8.109518516s
Jun 27 18:06:53.594: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Running", Reason="", readiness=false. Elapsed: 10.114385428s
Jun 27 18:06:55.600: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Running", Reason="", readiness=false. Elapsed: 12.120622328s
Jun 27 18:06:57.604: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Running", Reason="", readiness=false. Elapsed: 14.124748607s
Jun 27 18:06:59.625: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Running", Reason="", readiness=false. Elapsed: 16.145423916s
Jun 27 18:07:01.655: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Running", Reason="", readiness=false. Elapsed: 18.17560996s
Jun 27 18:07:03.695: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Running", Reason="", readiness=false. Elapsed: 20.215153378s
Jun 27 18:07:05.702: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Running", Reason="", readiness=false. Elapsed: 22.22181646s
Jun 27 18:07:07.766: INFO: Pod "pod-subpath-test-projected-8ptl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.285919572s
STEP: Saw pod success
Jun 27 18:07:07.766: INFO: Pod "pod-subpath-test-projected-8ptl" satisfied condition "success or failure"
Jun 27 18:07:07.773: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-subpath-test-projected-8ptl container test-container-subpath-projected-8ptl: 
STEP: delete the pod
Jun 27 18:07:07.824: INFO: Waiting for pod pod-subpath-test-projected-8ptl to disappear
Jun 27 18:07:07.830: INFO: Pod pod-subpath-test-projected-8ptl no longer exists
STEP: Deleting pod pod-subpath-test-projected-8ptl
Jun 27 18:07:07.830: INFO: Deleting pod "pod-subpath-test-projected-8ptl" in namespace "e2e-tests-subpath-56sss"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:07:07.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-56sss" for this suite.
Jun 27 18:07:13.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:07:13.903: INFO: namespace: e2e-tests-subpath-56sss, resource: bindings, ignored listing per whitelist
Jun 27 18:07:13.944: INFO: namespace e2e-tests-subpath-56sss deletion completed in 6.109618665s

• [SLOW TEST:30.674 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
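The run of `Phase="Pending"` / `Phase="Running"` lines above is the framework polling the pod roughly every 2 seconds, up to the stated 5m0s limit, until it reaches the "success or failure" condition (a terminal phase). A minimal sketch of that wait loop — `get_phase` is a hypothetical stand-in for reading `pod.status.phase` from the API server, not the framework's actual function:

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reports a terminal phase
    ("Succeeded" or "Failed") or the timeout elapses.

    get_phase is an assumed callback standing in for an API read of
    pod.status.phase; clock/sleep are injectable for testing.
    """
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in %.0fs" % timeout)
```

With a 2s interval and the phases logged above, this yields exactly the cadence of INFO lines seen in the transcript.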
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:07:13.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:07:14.114: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jun 27 18:07:14.122: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4npml/daemonsets","resourceVersion":"1369234"},"items":null}

Jun 27 18:07:14.124: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4npml/pods","resourceVersion":"1369234"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:07:14.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4npml" for this suite.
Jun 27 18:07:20.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:07:20.298: INFO: namespace: e2e-tests-daemonsets-4npml, resource: bindings, ignored listing per whitelist
Jun 27 18:07:20.298: INFO: namespace e2e-tests-daemonsets-4npml deletion completed in 6.168208692s

S [SKIPPING] [6.354 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jun 27 18:07:14.114: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
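The `[SKIPPING]` result above comes from the framework's node-count guard: the rollback spec needs at least 2 schedulable nodes, and the `(not -1)` in the message indicates the count was still the unset sentinel value when the check ran. A hedged sketch of that guard (the names `TestSkipped` and `skip_unless_node_count_at_least` are hypothetical, chosen to mirror the framework's behavior rather than its API):

```python
class TestSkipped(Exception):
    """Raised to abort a spec as skipped, mirroring Ginkgo's Skip()."""

def skip_unless_node_count_at_least(node_count, minimum):
    # node_count may be -1 when schedulable nodes have not been counted yet,
    # which still (correctly) fails the minimum check.
    if node_count < minimum:
        raise TestSkipped("Requires at least %d nodes (not %d)"
                          % (minimum, node_count))
```

Because `-1 < 2`, the guard produces precisely the skip message recorded at 18:07:14.114.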
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:07:20.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:07:20.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:07:24.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-snmnm" for this suite.
Jun 27 18:08:06.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:08:06.695: INFO: namespace: e2e-tests-pods-snmnm, resource: bindings, ignored listing per whitelist
Jun 27 18:08:06.753: INFO: namespace e2e-tests-pods-snmnm deletion completed in 42.205093448s

• [SLOW TEST:46.455 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:08:06.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-839d3c16-9906-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 18:08:06.979: INFO: Waiting up to 5m0s for pod "pod-configmaps-83a0dac3-9906-11e9-8fa9-0242ac110005" in namespace "e2e-tests-configmap-bdkgj" to be "success or failure"
Jun 27 18:08:07.235: INFO: Pod "pod-configmaps-83a0dac3-9906-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 255.470043ms
Jun 27 18:08:09.877: INFO: Pod "pod-configmaps-83a0dac3-9906-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.897429062s
Jun 27 18:08:11.884: INFO: Pod "pod-configmaps-83a0dac3-9906-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.904690633s
Jun 27 18:08:13.887: INFO: Pod "pod-configmaps-83a0dac3-9906-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.907670662s
STEP: Saw pod success
Jun 27 18:08:13.887: INFO: Pod "pod-configmaps-83a0dac3-9906-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:08:13.889: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-83a0dac3-9906-11e9-8fa9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jun 27 18:08:13.920: INFO: Waiting for pod pod-configmaps-83a0dac3-9906-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:08:14.050: INFO: Pod pod-configmaps-83a0dac3-9906-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:08:14.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bdkgj" for this suite.
Jun 27 18:08:20.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:08:20.158: INFO: namespace: e2e-tests-configmap-bdkgj, resource: bindings, ignored listing per whitelist
Jun 27 18:08:20.197: INFO: namespace e2e-tests-configmap-bdkgj deletion completed in 6.143964994s

• [SLOW TEST:13.444 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:08:20.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-hzvx
STEP: Creating a pod to test atomic-volume-subpath
Jun 27 18:08:20.468: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hzvx" in namespace "e2e-tests-subpath-rwqlr" to be "success or failure"
Jun 27 18:08:20.474: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076207ms
Jun 27 18:08:22.522: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053811817s
Jun 27 18:08:24.529: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061351677s
Jun 27 18:08:26.534: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066431996s
Jun 27 18:08:28.540: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Running", Reason="", readiness=false. Elapsed: 8.072109408s
Jun 27 18:08:30.543: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Running", Reason="", readiness=false. Elapsed: 10.075434837s
Jun 27 18:08:32.549: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Running", Reason="", readiness=false. Elapsed: 12.081264174s
Jun 27 18:08:34.555: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Running", Reason="", readiness=false. Elapsed: 14.087329469s
Jun 27 18:08:36.561: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Running", Reason="", readiness=false. Elapsed: 16.093101133s
Jun 27 18:08:38.566: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Running", Reason="", readiness=false. Elapsed: 18.098271439s
Jun 27 18:08:40.571: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Running", Reason="", readiness=false. Elapsed: 20.103428636s
Jun 27 18:08:42.578: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Running", Reason="", readiness=false. Elapsed: 22.110390771s
Jun 27 18:08:44.739: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Running", Reason="", readiness=false. Elapsed: 24.271590847s
Jun 27 18:08:46.745: INFO: Pod "pod-subpath-test-configmap-hzvx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.276969825s
STEP: Saw pod success
Jun 27 18:08:46.745: INFO: Pod "pod-subpath-test-configmap-hzvx" satisfied condition "success or failure"
Jun 27 18:08:46.749: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-subpath-test-configmap-hzvx container test-container-subpath-configmap-hzvx: 
STEP: delete the pod
Jun 27 18:08:46.809: INFO: Waiting for pod pod-subpath-test-configmap-hzvx to disappear
Jun 27 18:08:46.813: INFO: Pod pod-subpath-test-configmap-hzvx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hzvx
Jun 27 18:08:46.813: INFO: Deleting pod "pod-subpath-test-configmap-hzvx" in namespace "e2e-tests-subpath-rwqlr"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:08:46.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rwqlr" for this suite.
Jun 27 18:08:52.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:08:52.933: INFO: namespace: e2e-tests-subpath-rwqlr, resource: bindings, ignored listing per whitelist
Jun 27 18:08:52.984: INFO: namespace e2e-tests-subpath-rwqlr deletion completed in 6.163434577s

• [SLOW TEST:32.787 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:08:52.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-9f323dd9-9906-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 18:08:53.141: INFO: Waiting up to 5m0s for pod "pod-secrets-9f33688d-9906-11e9-8fa9-0242ac110005" in namespace "e2e-tests-secrets-5fz8k" to be "success or failure"
Jun 27 18:08:53.150: INFO: Pod "pod-secrets-9f33688d-9906-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.516884ms
Jun 27 18:08:55.154: INFO: Pod "pod-secrets-9f33688d-9906-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013816334s
Jun 27 18:08:57.159: INFO: Pod "pod-secrets-9f33688d-9906-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018456149s
STEP: Saw pod success
Jun 27 18:08:57.159: INFO: Pod "pod-secrets-9f33688d-9906-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:08:57.163: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-9f33688d-9906-11e9-8fa9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jun 27 18:08:57.214: INFO: Waiting for pod pod-secrets-9f33688d-9906-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:08:57.272: INFO: Pod pod-secrets-9f33688d-9906-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:08:57.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5fz8k" for this suite.
Jun 27 18:09:03.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:09:03.353: INFO: namespace: e2e-tests-secrets-5fz8k, resource: bindings, ignored listing per whitelist
Jun 27 18:09:03.381: INFO: namespace e2e-tests-secrets-5fz8k deletion completed in 6.101318095s

• [SLOW TEST:10.397 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:09:03.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jun 27 18:09:03.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:05.961: INFO: stderr: ""
Jun 27 18:09:05.961: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 27 18:09:05.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:06.102: INFO: stderr: ""
Jun 27 18:09:06.102: INFO: stdout: "update-demo-nautilus-dhnr8 update-demo-nautilus-m7jlz "
Jun 27 18:09:06.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dhnr8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:06.252: INFO: stderr: ""
Jun 27 18:09:06.252: INFO: stdout: ""
Jun 27 18:09:06.252: INFO: update-demo-nautilus-dhnr8 is created but not running
Jun 27 18:09:11.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:11.391: INFO: stderr: ""
Jun 27 18:09:11.391: INFO: stdout: "update-demo-nautilus-dhnr8 update-demo-nautilus-m7jlz "
Jun 27 18:09:11.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dhnr8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:11.482: INFO: stderr: ""
Jun 27 18:09:11.482: INFO: stdout: "true"
Jun 27 18:09:11.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dhnr8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:11.568: INFO: stderr: ""
Jun 27 18:09:11.568: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 27 18:09:11.568: INFO: validating pod update-demo-nautilus-dhnr8
Jun 27 18:09:11.574: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 27 18:09:11.574: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 27 18:09:11.574: INFO: update-demo-nautilus-dhnr8 is verified up and running
Jun 27 18:09:11.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7jlz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:11.660: INFO: stderr: ""
Jun 27 18:09:11.660: INFO: stdout: "true"
Jun 27 18:09:11.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7jlz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:11.733: INFO: stderr: ""
Jun 27 18:09:11.733: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 27 18:09:11.733: INFO: validating pod update-demo-nautilus-m7jlz
Jun 27 18:09:11.747: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 27 18:09:11.747: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 27 18:09:11.747: INFO: update-demo-nautilus-m7jlz is verified up and running
STEP: rolling-update to new replication controller
Jun 27 18:09:11.748: INFO: scanned /root for discovery docs: 
Jun 27 18:09:11.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:34.292: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jun 27 18:09:34.293: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 27 18:09:34.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:34.429: INFO: stderr: ""
Jun 27 18:09:34.429: INFO: stdout: "update-demo-kitten-7qwg9 update-demo-kitten-jhjmk "
Jun 27 18:09:34.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7qwg9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:34.507: INFO: stderr: ""
Jun 27 18:09:34.508: INFO: stdout: "true"
Jun 27 18:09:34.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7qwg9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:34.597: INFO: stderr: ""
Jun 27 18:09:34.597: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jun 27 18:09:34.597: INFO: validating pod update-demo-kitten-7qwg9
Jun 27 18:09:34.604: INFO: got data: {
  "image": "kitten.jpg"
}

Jun 27 18:09:34.604: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jun 27 18:09:34.604: INFO: update-demo-kitten-7qwg9 is verified up and running
Jun 27 18:09:34.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jhjmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:34.689: INFO: stderr: ""
Jun 27 18:09:34.689: INFO: stdout: "true"
Jun 27 18:09:34.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jhjmk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nbwbt'
Jun 27 18:09:34.767: INFO: stderr: ""
Jun 27 18:09:34.767: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jun 27 18:09:34.767: INFO: validating pod update-demo-kitten-jhjmk
Jun 27 18:09:34.771: INFO: got data: {
  "image": "kitten.jpg"
}

Jun 27 18:09:34.771: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jun 27 18:09:34.771: INFO: update-demo-kitten-jhjmk is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:09:34.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nbwbt" for this suite.
Jun 27 18:09:58.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:09:58.882: INFO: namespace: e2e-tests-kubectl-nbwbt, resource: bindings, ignored listing per whitelist
Jun 27 18:09:58.927: INFO: namespace e2e-tests-kubectl-nbwbt deletion completed in 24.153889852s

• [SLOW TEST:55.546 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
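The Update Demo validation above runs two `kubectl get ... -o template` checks per pod: one Go template that emits `true` only if the `update-demo` container has a `running` state, and one that extracts that container's image. The same predicate can be sketched in Python over a dict shaped like `kubectl get pod -o json` output (the function name and the assumption about input shape are illustrative, not the framework's code):

```python
def validate_update_demo_pod(pod, expected_image):
    """Re-state what the two kubectl templates above check:
    the 'update-demo' container is running AND uses expected_image.
    `pod` is assumed to be a dict parsed from `kubectl get pod -o json`.
    """
    statuses = pod.get("status", {}).get("containerStatuses", [])
    running = any(s.get("name") == "update-demo" and "running" in s.get("state", {})
                  for s in statuses)
    image_ok = any(c.get("image") == expected_image
                   for c in pod.get("spec", {}).get("containers", [])
                   if c.get("name") == "update-demo")
    return running and image_ok
```

The log's `stdout: ""` at 18:09:06.252 corresponds to the running-state template matching nothing (pod created but not yet running), after which the test retries on a 5-second cadence.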
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:09:58.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:09:59.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-lnhgm" for this suite.
Jun 27 18:10:05.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:10:05.355: INFO: namespace: e2e-tests-kubelet-test-lnhgm, resource: bindings, ignored listing per whitelist
Jun 27 18:10:05.364: INFO: namespace e2e-tests-kubelet-test-lnhgm deletion completed in 6.117236697s

• [SLOW TEST:6.437 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:10:05.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:10:05.466: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jun 27 18:10:05.491: INFO: Pod name sample-pod: Found 0 pods out of 1
Jun 27 18:10:10.496: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 27 18:10:10.496: INFO: Creating deployment "test-rolling-update-deployment"
Jun 27 18:10:10.501: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jun 27 18:10:10.517: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jun 27 18:10:12.745: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jun 27 18:10:12.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255810, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255810, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255810, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255810, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-68b55d7bc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 27 18:10:14.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255810, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255810, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255814, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697255810, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-68b55d7bc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 27 18:10:16.755: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jun 27 18:10:16.771: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-8d6h9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8d6h9/deployments/test-rolling-update-deployment,UID:cd5296a4-9906-11e9-a678-fa163e0cec1d,ResourceVersion:1369760,Generation:1,CreationTimestamp:2019-06-27 18:10:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-06-27 18:10:10 +0000 UTC 2019-06-27 18:10:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-06-27 18:10:14 +0000 UTC 2019-06-27 18:10:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-68b55d7bc6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jun 27 18:10:16.776: INFO: New ReplicaSet "test-rolling-update-deployment-68b55d7bc6" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-68b55d7bc6,GenerateName:,Namespace:e2e-tests-deployment-8d6h9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8d6h9/replicasets/test-rolling-update-deployment-68b55d7bc6,UID:cd57583b-9906-11e9-a678-fa163e0cec1d,ResourceVersion:1369751,Generation:1,CreationTimestamp:2019-06-27 18:10:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cd5296a4-9906-11e9-a678-fa163e0cec1d 0xc002046587 0xc002046588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jun 27 18:10:16.776: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jun 27 18:10:16.776: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-8d6h9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8d6h9/replicasets/test-rolling-update-controller,UID:ca52f95f-9906-11e9-a678-fa163e0cec1d,ResourceVersion:1369759,Generation:2,CreationTimestamp:2019-06-27 18:10:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cd5296a4-9906-11e9-a678-fa163e0cec1d 0xc0020464c7 0xc0020464c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jun 27 18:10:16.781: INFO: Pod "test-rolling-update-deployment-68b55d7bc6-ldj9s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-68b55d7bc6-ldj9s,GenerateName:test-rolling-update-deployment-68b55d7bc6-,Namespace:e2e-tests-deployment-8d6h9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8d6h9/pods/test-rolling-update-deployment-68b55d7bc6-ldj9s,UID:cd60f820-9906-11e9-a678-fa163e0cec1d,ResourceVersion:1369750,Generation:0,CreationTimestamp:2019-06-27 18:10:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-68b55d7bc6 cd57583b-9906-11e9-a678-fa163e0cec1d 0xc001c79f37 0xc001c79f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-twjvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-twjvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-twjvg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c79fa0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001c79fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:10:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:10:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:10:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:10:10 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.5,StartTime:2019-06-27 18:10:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-06-27 18:10:13 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a38ead602af1dbcc460fb97eace9f1c682697e7cdf98f2640973e5407f5cc03d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:10:16.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-8d6h9" for this suite.
Jun 27 18:10:24.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:10:24.926: INFO: namespace: e2e-tests-deployment-8d6h9, resource: bindings, ignored listing per whitelist
Jun 27 18:10:24.952: INFO: namespace e2e-tests-deployment-8d6h9 deletion completed in 8.165253269s

• [SLOW TEST:19.588 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
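The Deployment dump above shows the default `RollingUpdate` strategy with `MaxUnavailable:25%` and `MaxSurge:25%` (printed with `%!...(MISSING)` noise in the raw log, a Go format-verb artifact). As a sketch of how the controller resolves those percentages against the replica count — maxSurge rounds up and maxUnavailable rounds down, so a 1-replica Deployment surges by one extra pod while keeping the old pod available, which matches the `Replicas:2` seen mid-rollout. Function and parameter names here are illustrative, not the controller's:

```python
import math

def resolve_rolling_update(replicas, max_surge_pct=25, max_unavailable_pct=25):
    """Resolve percentage-based rollingUpdate bounds the way the
    Deployment controller does: maxSurge rounds up, maxUnavailable
    rounds down, so small Deployments still make progress."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    # Surge 0 with unavailable 0 would deadlock the rollout; the
    # controller bumps maxUnavailable to 1 in that case.
    if surge == 0 and unavailable == 0:
        unavailable = 1
    return surge, unavailable

print(resolve_rolling_update(1))   # (1, 0): one extra pod, none taken down
print(resolve_rolling_update(10))  # (3, 2)
```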
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:10:24.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-d65d3bf8-9906-11e9-8fa9-0242ac110005
STEP: Creating secret with name s-test-opt-upd-d65d3c6c-9906-11e9-8fa9-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-d65d3bf8-9906-11e9-8fa9-0242ac110005
STEP: Updating secret s-test-opt-upd-d65d3c6c-9906-11e9-8fa9-0242ac110005
STEP: Creating secret with name s-test-opt-create-d65d3c91-9906-11e9-8fa9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:11:50.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-p5ksv" for this suite.
Jun 27 18:12:14.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:12:14.895: INFO: namespace: e2e-tests-secrets-p5ksv, resource: bindings, ignored listing per whitelist
Jun 27 18:12:14.940: INFO: namespace e2e-tests-secrets-p5ksv deletion completed in 24.110418344s

• [SLOW TEST:109.988 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
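Steps like "waiting to observe update in volume" above (and the pod-disappear waits earlier in the run) all follow the same pattern: poll a condition every couple of seconds until it holds or a timeout expires, in the style of the framework's wait helpers. A minimal generic sketch — names are mine, not the framework's:

```python
import time

def poll_until(condition, timeout_s=60.0, interval_s=2.0,
               clock=time.monotonic, sleep=time.sleep):
    """Re-check `condition` every `interval_s` seconds until it returns
    True or `timeout_s` elapses; report whether it ever succeeded."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        if condition():
            return True
        sleep(interval_s)
    return False

# Simulated condition that succeeds on the third check, with sleeping
# stubbed out so the example runs instantly.
calls = {"n": 0}
def secret_visible_in_volume():
    calls["n"] += 1
    return calls["n"] >= 3

print(poll_until(secret_visible_in_volume, timeout_s=10, sleep=lambda s: None))  # True
```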
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:12:14.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jun 27 18:12:15.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jun 27 18:12:15.377: INFO: stderr: ""
Jun 27 18:12:15.377: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:12:15.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-l2p7s" for this suite.
Jun 27 18:12:21.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:12:21.498: INFO: namespace: e2e-tests-kubectl-l2p7s, resource: bindings, ignored listing per whitelist
Jun 27 18:12:21.517: INFO: namespace e2e-tests-kubectl-l2p7s deletion completed in 6.135159283s

• [SLOW TEST:6.577 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
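The `kubectl api-versions` check above boils down to scanning the newline-separated stdout for an exact `v1` entry. A sketch of that validation, with the sample list abbreviated from the log's stdout; exact-line matching matters because a substring test would wrongly accept `v1` inside `apps/v1`:

```python
def has_api_version(stdout: str, target: str) -> bool:
    """True if `target` appears as an exact line in the
    `kubectl api-versions` output."""
    return target in stdout.splitlines()

sample = "apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nv1\n"
print(has_api_version(sample, "v1"))       # True
print(has_api_version(sample, "apps/v2"))  # False
```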
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:12:21.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jun 27 18:12:21.685: INFO: Waiting up to 5m0s for pod "var-expansion-1b7e88f0-9907-11e9-8fa9-0242ac110005" in namespace "e2e-tests-var-expansion-d2z9l" to be "success or failure"
Jun 27 18:12:21.698: INFO: Pod "var-expansion-1b7e88f0-9907-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.29347ms
Jun 27 18:12:23.755: INFO: Pod "var-expansion-1b7e88f0-9907-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070717745s
Jun 27 18:12:25.761: INFO: Pod "var-expansion-1b7e88f0-9907-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076299761s
STEP: Saw pod success
Jun 27 18:12:25.761: INFO: Pod "var-expansion-1b7e88f0-9907-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:12:25.765: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod var-expansion-1b7e88f0-9907-11e9-8fa9-0242ac110005 container dapi-container: 
STEP: delete the pod
Jun 27 18:12:25.796: INFO: Waiting for pod var-expansion-1b7e88f0-9907-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:12:25.804: INFO: Pod var-expansion-1b7e88f0-9907-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:12:25.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-d2z9l" for this suite.
Jun 27 18:12:31.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:12:31.861: INFO: namespace: e2e-tests-var-expansion-d2z9l, resource: bindings, ignored listing per whitelist
Jun 27 18:12:31.949: INFO: namespace e2e-tests-var-expansion-d2z9l deletion completed in 6.140815349s

• [SLOW TEST:10.431 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
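The "substitution in container's args" test exercises the `$(VAR)` expansion Kubernetes applies to container commands and args. A simplified expander sketch — real kubelet expansion also handles `$$` escaping, which is omitted here, and the pod/variable names are illustrative:

```python
import re

def expand_args(args, env):
    """Replace $(NAME) references with values from `env`; unknown
    references are left verbatim, matching Kubernetes behavior."""
    def repl(match):
        return env.get(match.group(1), match.group(0))
    return [re.sub(r"\$\((\w+)\)", repl, arg) for arg in args]

print(expand_args(["echo", "$(POD_NAME) on $(UNSET)"],
                  {"POD_NAME": "var-expansion"}))
# ['echo', 'var-expansion on $(UNSET)']
```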
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:12:31.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:12:36.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-cztqr" for this suite.
Jun 27 18:12:42.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:12:42.148: INFO: namespace: e2e-tests-kubelet-test-cztqr, resource: bindings, ignored listing per whitelist
Jun 27 18:12:42.158: INFO: namespace e2e-tests-kubelet-test-cztqr deletion completed in 6.076359665s

• [SLOW TEST:10.209 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
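The test above schedules a busybox command that always fails and asserts the container's terminated state carries a reason. As a sketch of the mapping the kubelet reports — simplified, since the real status also carries the exit code, signal, and timestamps:

```python
def terminated_reason(exit_code: int) -> str:
    """A container that exits 0 is reported as 'Completed'; any
    nonzero exit is reported as 'Error'."""
    return "Completed" if exit_code == 0 else "Error"

print(terminated_reason(1))  # Error
print(terminated_reason(0))  # Completed
```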
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:12:42.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jun 27 18:12:42.259: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 27 18:12:42.266: INFO: Waiting for terminating namespaces to be deleted...
Jun 27 18:12:42.268: INFO: Logging pods the kubelet thinks is on node hunter-server-x6tdbol33slm before test

Jun 27 18:12:42.344: INFO: etcd-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 18:12:42.344: INFO: kube-controller-manager-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 18:12:42.344: INFO: kube-apiserver-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 18:12:42.344: INFO: weave-net-z4vkv from kube-system started at 2019-06-16 12:55:36 +0000 UTC (2 container statuses recorded)
Jun 27 18:12:42.344: INFO: 	Container weave ready: true, restart count 0
Jun 27 18:12:42.344: INFO: 	Container weave-npc ready: true, restart count 0
Jun 27 18:12:42.344: INFO: coredns-86c58d9df4-zdm4x from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded)
Jun 27 18:12:42.344: INFO: 	Container coredns ready: true, restart count 0
Jun 27 18:12:42.344: INFO: kube-scheduler-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 18:12:42.344: INFO: coredns-86c58d9df4-99n2k from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded)
Jun 27 18:12:42.344: INFO: 	Container coredns ready: true, restart count 0
Jun 27 18:12:42.344: INFO: kube-proxy-ww64l from kube-system started at 2019-06-16 12:55:34 +0000 UTC (1 container statuses recorded)
Jun 27 18:12:42.344: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-2a566cf3-9907-11e9-8fa9-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-2a566cf3-9907-11e9-8fa9-0242ac110005 off the node hunter-server-x6tdbol33slm
STEP: verifying the node doesn't have the label kubernetes.io/e2e-2a566cf3-9907-11e9-8fa9-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:12:50.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-8nw9h" for this suite.
Jun 27 18:13:10.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:13:10.963: INFO: namespace: e2e-tests-sched-pred-8nw9h, resource: bindings, ignored listing per whitelist
Jun 27 18:13:10.967: INFO: namespace e2e-tests-sched-pred-8nw9h deletion completed in 20.160228707s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:28.810 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
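The NodeSelector predicate validated above is a label-subset check: a pod schedules onto a node only if every key/value pair in its `nodeSelector` matches a label on the node. A sketch of that match, with the label key/value standing in for the random `kubernetes.io/e2e-...=42` label the test applied:

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """Every nodeSelector entry must match a node label exactly."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

node = {
    "kubernetes.io/hostname": "hunter-server-x6tdbol33slm",
    "kubernetes.io/e2e-test-label": "42",  # stand-in for the random e2e label
}
print(node_selector_matches(node, {"kubernetes.io/e2e-test-label": "42"}))  # True
print(node_selector_matches(node, {"kubernetes.io/e2e-test-label": "7"}))   # False
```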
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:13:10.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:13:35.166: INFO: Container started at 2019-06-27 18:13:13 +0000 UTC, pod became ready at 2019-06-27 18:13:34 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:13:35.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-5pkch" for this suite.
Jun 27 18:13:57.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:13:57.251: INFO: namespace: e2e-tests-container-probe-5pkch, resource: bindings, ignored listing per whitelist
Jun 27 18:13:57.258: INFO: namespace e2e-tests-container-probe-5pkch deletion completed in 22.090349334s

• [SLOW TEST:46.291 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
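The probe test's single INFO line is the whole assertion: the container started at 18:13:13 but the pod only became Ready at 18:13:34, i.e. not before the readiness probe's initial delay had elapsed. A sketch of that timing check; the 20-second `initialDelaySeconds` is an assumption on my part, as the log only shows the 21-second gap:

```python
from datetime import datetime

started = datetime(2019, 6, 27, 18, 13, 13)  # container start, from the log
ready = datetime(2019, 6, 27, 18, 13, 34)    # pod Ready, from the log
initial_delay_s = 20                          # assumed probe initialDelaySeconds

gap = (ready - started).total_seconds()
print(gap)                     # 21.0
print(gap >= initial_delay_s)  # True: Ready only after the delay
```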
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:13:57.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-v46fz
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-v46fz
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-v46fz
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-v46fz
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-v46fz
Jun 27 18:14:03.598: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v46fz, name: ss-0, uid: 5662ce4f-9907-11e9-a678-fa163e0cec1d, status phase: Pending. Waiting for statefulset controller to delete.
Jun 27 18:14:05.704: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v46fz, name: ss-0, uid: 5662ce4f-9907-11e9-a678-fa163e0cec1d, status phase: Failed. Waiting for statefulset controller to delete.
Jun 27 18:14:05.728: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-v46fz, name: ss-0, uid: 5662ce4f-9907-11e9-a678-fa163e0cec1d, status phase: Failed. Waiting for statefulset controller to delete.
Jun 27 18:14:05.758: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-v46fz
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-v46fz
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-v46fz and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jun 27 18:14:09.912: INFO: Deleting all statefulset in ns e2e-tests-statefulset-v46fz
Jun 27 18:14:09.916: INFO: Scaling statefulset ss to 0
Jun 27 18:14:19.940: INFO: Waiting for statefulset status.replicas updated to 0
Jun 27 18:14:19.944: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:14:19.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-v46fz" for this suite.
Jun 27 18:14:25.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:14:26.008: INFO: namespace: e2e-tests-statefulset-v46fz, resource: bindings, ignored listing per whitelist
Jun 27 18:14:26.100: INFO: namespace e2e-tests-statefulset-v46fz deletion completed in 6.133057288s

• [SLOW TEST:28.841 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
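The eviction scenario above hinges on a hostPort collision: a plain pod claims a host port on the chosen node, and the StatefulSet's pod template requests the same port, so ss-0 repeatedly fails until the conflicting pod is removed. A minimal sketch of the two competing objects follows; the node name and port value are illustrative assumptions, not taken from the log:

```yaml
# Illustrative only: a pod and a StatefulSet competing for the same hostPort.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod            # the "pod with conflicting port" from the log
spec:
  nodeName: chosen-node     # hypothetical; the test pins both onto one node
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
      hostPort: 21017       # hypothetical port value
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - containerPort: 80
          hostPort: 21017   # same hostPort: ss-0 fails until test-pod is deleted
```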
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:14:26.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 27 18:14:26.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-lm296'
Jun 27 18:14:26.378: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 27 18:14:26.378: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jun 27 18:14:28.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lm296'
Jun 27 18:14:28.541: INFO: stderr: ""
Jun 27 18:14:28.541: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:14:28.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lm296" for this suite.
Jun 27 18:16:22.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:16:22.793: INFO: namespace: e2e-tests-kubectl-lm296, resource: bindings, ignored listing per whitelist
Jun 27 18:16:22.834: INFO: namespace e2e-tests-kubectl-lm296 deletion completed in 1m54.230744238s

• [SLOW TEST:116.734 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
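The stderr line in the spec above shows why `kubectl run` with `--generator=deployment/apps.v1` was deprecated: it created a full Deployment rather than a bare pod. What it generated was roughly equivalent to applying a manifest like this (a sketch; the `run:` label convention is an assumption about the generator's behavior):

```yaml
# Roughly what `kubectl run e2e-test-nginx-deployment --image=...` produced
# via the deprecated deployment/apps.v1 generator (labels are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```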
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:16:22.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jun 27 18:16:22.922: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix988323607/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:16:22.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d89nw" for this suite.
Jun 27 18:16:29.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:16:29.160: INFO: namespace: e2e-tests-kubectl-d89nw, resource: bindings, ignored listing per whitelist
Jun 27 18:16:29.172: INFO: namespace e2e-tests-kubectl-d89nw deletion completed in 6.173538536s

• [SLOW TEST:6.338 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:16:29.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:16:29.347: INFO: Creating ReplicaSet my-hostname-basic-af206f50-9907-11e9-8fa9-0242ac110005
Jun 27 18:16:29.381: INFO: Pod name my-hostname-basic-af206f50-9907-11e9-8fa9-0242ac110005: Found 0 pods out of 1
Jun 27 18:16:34.388: INFO: Pod name my-hostname-basic-af206f50-9907-11e9-8fa9-0242ac110005: Found 1 pods out of 1
Jun 27 18:16:34.388: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-af206f50-9907-11e9-8fa9-0242ac110005" is running
Jun 27 18:16:34.393: INFO: Pod "my-hostname-basic-af206f50-9907-11e9-8fa9-0242ac110005-7lhvh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-27 18:16:29 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-27 18:16:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-27 18:16:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-27 18:16:29 +0000 UTC Reason: Message:}])
Jun 27 18:16:34.393: INFO: Trying to dial the pod
Jun 27 18:16:39.411: INFO: Controller my-hostname-basic-af206f50-9907-11e9-8fa9-0242ac110005: Got expected result from replica 1 [my-hostname-basic-af206f50-9907-11e9-8fa9-0242ac110005-7lhvh]: "my-hostname-basic-af206f50-9907-11e9-8fa9-0242ac110005-7lhvh", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:16:39.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-spwnd" for this suite.
Jun 27 18:16:45.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:16:45.586: INFO: namespace: e2e-tests-replicaset-spwnd, resource: bindings, ignored listing per whitelist
Jun 27 18:16:45.611: INFO: namespace e2e-tests-replicaset-spwnd deletion completed in 6.195469298s

• [SLOW TEST:16.439 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
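The ReplicaSet spec above verifies each replica by dialing it and expecting the pod's own name back, which is what the "Got expected result from replica 1" line records. A sketch of the object under test; the image and port are assumptions based on the common serve-hostname e2e image, not read from the log:

```yaml
# Sketch: each replica runs serve-hostname, which answers HTTP requests
# with its own pod name (image tag and port are assumed).
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```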
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:16:45.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 27 18:16:45.863: INFO: Waiting up to 5m0s for pod "pod-b8f21e7d-9907-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-l7m6g" to be "success or failure"
Jun 27 18:16:45.880: INFO: Pod "pod-b8f21e7d-9907-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.822072ms
Jun 27 18:16:47.884: INFO: Pod "pod-b8f21e7d-9907-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020193682s
Jun 27 18:16:49.887: INFO: Pod "pod-b8f21e7d-9907-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023514125s
STEP: Saw pod success
Jun 27 18:16:49.887: INFO: Pod "pod-b8f21e7d-9907-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:16:49.889: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-b8f21e7d-9907-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 18:16:49.976: INFO: Waiting for pod pod-b8f21e7d-9907-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:16:49.980: INFO: Pod pod-b8f21e7d-9907-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:16:49.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l7m6g" for this suite.
Jun 27 18:16:56.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:16:56.200: INFO: namespace: e2e-tests-emptydir-l7m6g, resource: bindings, ignored listing per whitelist
Jun 27 18:16:56.225: INFO: namespace e2e-tests-emptydir-l7m6g deletion completed in 6.144727358s

• [SLOW TEST:10.614 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
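The "(non-root,0666,default)" naming above encodes the test matrix: a non-root user, file mode 0666, and the default emptyDir medium (node disk rather than tmpfs). A sketch of such a pod; the mounttest image, its flags, and the UID are illustrative assumptions:

```yaml
# Sketch: a non-root container writes a 0666-mode file into an emptyDir on
# the default medium, and the test reads the resulting permissions back.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # image assumed
    args: ["--new_file_mode=/test-volume/test-file",          # flags assumed
           "--file_mode=/test-volume/test-file"]
    securityContext:
      runAsUser: 1001          # hypothetical non-root UID
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium: node-local disk, not tmpfs
```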
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:16:56.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jun 27 18:17:00.378: INFO: Pod pod-hostip-bf377cad-9907-11e9-8fa9-0242ac110005 has hostIP: 192.168.100.12
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:17:00.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-t5ghn" for this suite.
Jun 27 18:17:22.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:17:22.508: INFO: namespace: e2e-tests-pods-t5ghn, resource: bindings, ignored listing per whitelist
Jun 27 18:17:22.533: INFO: namespace e2e-tests-pods-t5ghn deletion completed in 22.151570231s

• [SLOW TEST:26.307 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:17:22.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jun 27 18:17:22.654: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:17:32.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-x47k5" for this suite.
Jun 27 18:17:56.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:17:56.720: INFO: namespace: e2e-tests-init-container-x47k5, resource: bindings, ignored listing per whitelist
Jun 27 18:17:56.830: INFO: namespace e2e-tests-init-container-x47k5 deletion completed in 24.145927156s

• [SLOW TEST:34.296 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
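The init-container spec above relies on the ordering guarantee that `spec.initContainers` run to completion, one at a time, before any app container starts. A minimal sketch of a RestartAlways pod of that shape (names and images are illustrative):

```yaml
# Sketch: init1 and init2 run sequentially to completion before run1 starts;
# with restartPolicy Always the pod then stays running.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["true"]
  - name: init2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: run1
    image: busybox:1.29
    command: ["sleep", "3600"]
```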
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:17:56.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jun 27 18:17:57.461: INFO: created pod pod-service-account-defaultsa
Jun 27 18:17:57.461: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jun 27 18:17:57.476: INFO: created pod pod-service-account-mountsa
Jun 27 18:17:57.476: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jun 27 18:17:57.494: INFO: created pod pod-service-account-nomountsa
Jun 27 18:17:57.494: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jun 27 18:17:57.526: INFO: created pod pod-service-account-defaultsa-mountspec
Jun 27 18:17:57.526: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jun 27 18:17:57.628: INFO: created pod pod-service-account-mountsa-mountspec
Jun 27 18:17:57.628: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jun 27 18:17:57.687: INFO: created pod pod-service-account-nomountsa-mountspec
Jun 27 18:17:57.687: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jun 27 18:17:57.708: INFO: created pod pod-service-account-defaultsa-nomountspec
Jun 27 18:17:57.708: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jun 27 18:17:57.810: INFO: created pod pod-service-account-mountsa-nomountspec
Jun 27 18:17:57.810: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jun 27 18:17:57.894: INFO: created pod pod-service-account-nomountsa-nomountspec
Jun 27 18:17:57.894: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:17:57.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-8glwp" for this suite.
Jun 27 18:18:40.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:18:40.278: INFO: namespace: e2e-tests-svcaccounts-8glwp, resource: bindings, ignored listing per whitelist
Jun 27 18:18:40.283: INFO: namespace e2e-tests-svcaccounts-8glwp deletion completed in 42.28582897s

• [SLOW TEST:43.453 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
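The defaultsa/mountsa/nomountsa × mountspec/nomountspec matrix in the log above exercises one rule: `automountServiceAccountToken` on the pod spec, when set, overrides the same field on the ServiceAccount. A sketch of one cell of that matrix (object names mirror the log; the container is illustrative):

```yaml
# Sketch: the ServiceAccount opts out of token automount, but the pod spec
# opts back in, and the pod-level setting wins (token IS mounted).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomountsa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-mountspec
spec:
  serviceAccountName: nomountsa
  automountServiceAccountToken: true   # overrides the ServiceAccount setting
  containers:
  - name: token-test
    image: busybox:1.29
    command: ["sleep", "3600"]
```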
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:18:40.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:19:06.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-nzdcb" for this suite.
Jun 27 18:19:12.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:19:12.420: INFO: namespace: e2e-tests-container-runtime-nzdcb, resource: bindings, ignored listing per whitelist
Jun 27 18:19:12.423: INFO: namespace e2e-tests-container-runtime-nzdcb deletion completed in 6.182702671s

• [SLOW TEST:32.140 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:19:12.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-106a36be-9908-11e9-8fa9-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-106a3687-9908-11e9-8fa9-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 27 18:19:12.601: INFO: Waiting up to 5m0s for pod "projected-volume-106a3603-9908-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-w64k4" to be "success or failure"
Jun 27 18:19:12.715: INFO: Pod "projected-volume-106a3603-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 114.395969ms
Jun 27 18:19:14.719: INFO: Pod "projected-volume-106a3603-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118463178s
Jun 27 18:19:16.725: INFO: Pod "projected-volume-106a3603-9908-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124160916s
STEP: Saw pod success
Jun 27 18:19:16.725: INFO: Pod "projected-volume-106a3603-9908-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:19:16.729: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod projected-volume-106a3603-9908-11e9-8fa9-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jun 27 18:19:16.813: INFO: Waiting for pod projected-volume-106a3603-9908-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:19:16.821: INFO: Pod projected-volume-106a3603-9908-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:19:16.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w64k4" for this suite.
Jun 27 18:19:22.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:19:22.984: INFO: namespace: e2e-tests-projected-w64k4, resource: bindings, ignored listing per whitelist
Jun 27 18:19:22.989: INFO: namespace e2e-tests-projected-w64k4 deletion completed in 6.161616924s

• [SLOW TEST:10.566 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
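"All projections" in the spec above means a single `projected` volume combining downwardAPI, configMap, and secret sources under one mount point. A sketch of that layout; the key names and the shortened object names are assumptions for illustration:

```yaml
# Sketch: one projected volume merging three source types into one directory,
# as the projected-combined test does.
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: podinfo
      mountPath: /all
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-test-volume   # name shortened
          items:
          - key: configmap-data                       # key assumed
            path: cm-data
      - secret:
          name: secret-projected-all-test-volume      # name shortened
          items:
          - key: secret-data                          # key assumed
            path: secret-data
```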
SS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:19:22.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-16b8b2a5-9908-11e9-8fa9-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:19:27.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-nstm5" for this suite.
Jun 27 18:19:49.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:19:49.354: INFO: namespace: e2e-tests-configmap-nstm5, resource: bindings, ignored listing per whitelist
Jun 27 18:19:49.407: INFO: namespace e2e-tests-configmap-nstm5 deletion completed in 22.17515745s

• [SLOW TEST:26.418 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
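The "binary data" being checked above comes from the ConfigMap `binaryData` field, which carries base64-encoded bytes alongside the UTF-8 `data` field; the test mounts the ConfigMap and verifies both kinds of keys appear in the volume. A minimal sketch (key names and payload are illustrative):

```yaml
# Sketch: a ConfigMap mixing UTF-8 `data` with base64 `binaryData`.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd
data:
  data-1: value-1
binaryData:
  dump.bin: AQIDBA==   # bytes 0x01 0x02 0x03 0x04, base64-encoded
```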
SS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:19:49.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun 27 18:19:54.143: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2679e72f-9908-11e9-8fa9-0242ac110005"
Jun 27 18:19:54.143: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2679e72f-9908-11e9-8fa9-0242ac110005" in namespace "e2e-tests-pods-jkbsj" to be "terminated due to deadline exceeded"
Jun 27 18:19:54.171: INFO: Pod "pod-update-activedeadlineseconds-2679e72f-9908-11e9-8fa9-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 28.045429ms
Jun 27 18:19:56.185: INFO: Pod "pod-update-activedeadlineseconds-2679e72f-9908-11e9-8fa9-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.041777766s
Jun 27 18:19:56.185: INFO: Pod "pod-update-activedeadlineseconds-2679e72f-9908-11e9-8fa9-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:19:56.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jkbsj" for this suite.
Jun 27 18:20:02.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:20:02.210: INFO: namespace: e2e-tests-pods-jkbsj, resource: bindings, ignored listing per whitelist
Jun 27 18:20:02.326: INFO: namespace e2e-tests-pods-jkbsj deletion completed in 6.137166535s

• [SLOW TEST:12.919 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:20:02.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-2e2b4663-9908-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 18:20:02.678: INFO: Waiting up to 5m0s for pod "pod-secrets-2e47205d-9908-11e9-8fa9-0242ac110005" in namespace "e2e-tests-secrets-ps88c" to be "success or failure"
Jun 27 18:20:02.686: INFO: Pod "pod-secrets-2e47205d-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.116406ms
Jun 27 18:20:04.692: INFO: Pod "pod-secrets-2e47205d-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013067448s
Jun 27 18:20:06.696: INFO: Pod "pod-secrets-2e47205d-9908-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017713473s
STEP: Saw pod success
Jun 27 18:20:06.696: INFO: Pod "pod-secrets-2e47205d-9908-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:20:06.700: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-2e47205d-9908-11e9-8fa9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jun 27 18:20:06.732: INFO: Waiting for pod pod-secrets-2e47205d-9908-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:20:06.738: INFO: Pod pod-secrets-2e47205d-9908-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:20:06.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ps88c" for this suite.
Jun 27 18:20:12.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:20:12.849: INFO: namespace: e2e-tests-secrets-ps88c, resource: bindings, ignored listing per whitelist
Jun 27 18:20:12.897: INFO: namespace e2e-tests-secrets-ps88c deletion completed in 6.155107863s

• [SLOW TEST:10.571 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
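Annotation (not part of the log): when a Secret is mounted as a volume, `defaultMode` sets the file permissions of each projected key, and the test container stats the mounted file to verify them. A sketch of the volume fragment the test presumably builds (field names follow the Kubernetes pod API; the mode value is an assumption):

```python
# Illustrative sketch of a secret volume with defaultMode set.

def secret_volume(name: str, secret_name: str, default_mode: int) -> dict:
    """Volume spec projecting a Secret with explicit file permissions."""
    return {"name": name,
            "secret": {"secretName": secret_name,
                       "defaultMode": default_mode}}  # octal file mode

vol = secret_volume("secret-volume", "secret-test", 0o400)
# The kubelet writes each secret key as a file with these permissions:
print(oct(vol["secret"]["defaultMode"]))
```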
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:20:12.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jun 27 18:20:21.329: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 27 18:20:21.457: INFO: Pod pod-with-poststart-http-hook still exists
Jun 27 18:20:23.457: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 27 18:20:23.463: INFO: Pod pod-with-poststart-http-hook still exists
Jun 27 18:20:25.457: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 27 18:20:25.486: INFO: Pod pod-with-poststart-http-hook still exists
Jun 27 18:20:27.457: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 27 18:20:27.486: INFO: Pod pod-with-poststart-http-hook still exists
Jun 27 18:20:29.457: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 27 18:20:29.510: INFO: Pod pod-with-poststart-http-hook still exists
Jun 27 18:20:31.457: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 27 18:20:31.462: INFO: Pod pod-with-poststart-http-hook still exists
Jun 27 18:20:33.457: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 27 18:20:33.460: INFO: Pod pod-with-poststart-http-hook still exists
Jun 27 18:20:35.457: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 27 18:20:35.461: INFO: Pod pod-with-poststart-http-hook still exists
Jun 27 18:20:37.457: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun 27 18:20:37.462: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:20:37.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qpls8" for this suite.
Jun 27 18:21:01.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:21:01.566: INFO: namespace: e2e-tests-container-lifecycle-hook-qpls8, resource: bindings, ignored listing per whitelist
Jun 27 18:21:01.600: INFO: namespace e2e-tests-container-lifecycle-hook-qpls8 deletion completed in 24.127893114s

• [SLOW TEST:48.702 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
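Annotation (not part of the log): the lifecycle-hook tests first start a handler pod ("create the container to handle the HTTPGet hook request"), then create a pod whose `postStart` hook issues an HTTP GET against that handler; the "check poststart hook" step verifies the handler received it. A sketch of the hooked container spec, with image and path as assumptions:

```python
# Illustrative sketch; field names follow the Kubernetes pod API.

def container_with_poststart_http(handler_ip: str, port: int = 8080) -> dict:
    """Container whose postStart hook HTTP-GETs the handler pod."""
    return {
        "name": "pod-with-poststart-http-hook",
        "image": "k8s.gcr.io/pause",          # placeholder image (assumption)
        "lifecycle": {
            "postStart": {
                "httpGet": {"path": "/echo?msg=poststart",  # assumed path
                            "host": handler_ip,
                            "port": port}
            }
        },
    }
```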
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:21:01.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 27 18:21:01.765: INFO: Waiting up to 5m0s for pod "pod-517cad07-9908-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-7zqt4" to be "success or failure"
Jun 27 18:21:01.833: INFO: Pod "pod-517cad07-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 68.223182ms
Jun 27 18:21:03.839: INFO: Pod "pod-517cad07-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073902974s
Jun 27 18:21:05.843: INFO: Pod "pod-517cad07-9908-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077857791s
STEP: Saw pod success
Jun 27 18:21:05.843: INFO: Pod "pod-517cad07-9908-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:21:05.845: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-517cad07-9908-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 18:21:06.074: INFO: Waiting for pod pod-517cad07-9908-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:21:06.079: INFO: Pod pod-517cad07-9908-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:21:06.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7zqt4" for this suite.
Jun 27 18:21:12.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:21:12.157: INFO: namespace: e2e-tests-emptydir-7zqt4, resource: bindings, ignored listing per whitelist
Jun 27 18:21:12.159: INFO: namespace e2e-tests-emptydir-7zqt4 deletion completed in 6.078235929s

• [SLOW TEST:10.559 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
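Annotation (not part of the log): the emptyDir conformance tests are parameterized as `(user, mode, medium)` — here non-root, 0666, tmpfs — where setting `emptyDir.medium` to `"Memory"` backs the volume with tmpfs instead of node disk. A sketch of the pod shape such a test presumably creates (image and command are assumptions):

```python
# Illustrative sketch of an emptyDir permission-test pod.

def emptydir_pod(mode: int, medium: str = "") -> dict:
    """Pod whose container creates a file in the volume and lists its perms."""
    volume = {"name": "test-volume", "emptyDir": {}}
    if medium:  # "Memory" selects a tmpfs-backed volume
        volume["emptyDir"]["medium"] = medium
    return {
        "spec": {
            "containers": [{
                "name": "test-container",
                "image": "busybox",  # assumption
                "command": ["sh", "-c",
                            f"touch /mnt/f && chmod {mode:o} /mnt/f && ls -l /mnt/f"],
                "volumeMounts": [{"name": "test-volume", "mountPath": "/mnt"}],
            }],
            "volumes": [volume],
        }
    }

pod = emptydir_pod(0o666, medium="Memory")
```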
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:21:12.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 18:21:12.291: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57c57373-9908-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-4msk7" to be "success or failure"
Jun 27 18:21:12.309: INFO: Pod "downwardapi-volume-57c57373-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.644832ms
Jun 27 18:21:14.511: INFO: Pod "downwardapi-volume-57c57373-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220263908s
Jun 27 18:21:16.517: INFO: Pod "downwardapi-volume-57c57373-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226120911s
Jun 27 18:21:18.522: INFO: Pod "downwardapi-volume-57c57373-9908-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.231280871s
STEP: Saw pod success
Jun 27 18:21:18.522: INFO: Pod "downwardapi-volume-57c57373-9908-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:21:18.526: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-57c57373-9908-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 18:21:18.555: INFO: Waiting for pod downwardapi-volume-57c57373-9908-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:21:18.702: INFO: Pod downwardapi-volume-57c57373-9908-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:21:18.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4msk7" for this suite.
Jun 27 18:21:24.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:21:24.993: INFO: namespace: e2e-tests-projected-4msk7, resource: bindings, ignored listing per whitelist
Jun 27 18:21:25.031: INFO: namespace e2e-tests-projected-4msk7 deletion completed in 6.31329266s

• [SLOW TEST:12.872 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:21:25.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jun 27 18:21:25.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-tckzg run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jun 27 18:21:30.719: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Jun 27 18:21:30.719: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:21:32.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tckzg" for this suite.
Jun 27 18:21:38.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:21:38.947: INFO: namespace: e2e-tests-kubectl-tckzg, resource: bindings, ignored listing per whitelist
Jun 27 18:21:38.969: INFO: namespace e2e-tests-kubectl-tckzg deletion completed in 6.242986234s

• [SLOW TEST:13.938 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
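Annotation (not part of the log): the exact (now-deprecated) `kubectl run --generator=job/v1 --rm` invocation appears verbatim in the log above; `--rm` makes kubectl delete the job after the attached session ends, which the test then verifies. A sketch that rebuilds that command line from the values in the log (actually running it requires a live cluster, so only the argv is constructed here):

```python
# Command-line fields copied from the log; building the argv only.

def rm_job_cmd(namespace: str, name: str) -> list:
    return ["kubectl", f"--namespace={namespace}", "run", name,
            "--image=docker.io/library/busybox:1.29",
            "--rm=true", "--generator=job/v1", "--restart=OnFailure",
            "--attach=true", "--stdin", "--",
            "sh", "-c", "cat && echo 'stdin closed'"]

cmd = rm_job_cmd("e2e-tests-kubectl-tckzg", "e2e-test-rm-busybox-job")
# To actually run it against a cluster:
# subprocess.run(cmd, input=b"abcd1234", check=True)
```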
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:21:38.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 27 18:21:39.378: INFO: Waiting up to 5m0s for pod "pod-67e943ec-9908-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-x4zdq" to be "success or failure"
Jun 27 18:21:39.383: INFO: Pod "pod-67e943ec-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.64673ms
Jun 27 18:21:41.387: INFO: Pod "pod-67e943ec-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008951649s
Jun 27 18:21:43.392: INFO: Pod "pod-67e943ec-9908-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013866542s
STEP: Saw pod success
Jun 27 18:21:43.392: INFO: Pod "pod-67e943ec-9908-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:21:43.395: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-67e943ec-9908-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 18:21:43.459: INFO: Waiting for pod pod-67e943ec-9908-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:21:43.744: INFO: Pod pod-67e943ec-9908-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:21:43.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-x4zdq" for this suite.
Jun 27 18:21:49.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:21:49.827: INFO: namespace: e2e-tests-emptydir-x4zdq, resource: bindings, ignored listing per whitelist
Jun 27 18:21:49.931: INFO: namespace e2e-tests-emptydir-x4zdq deletion completed in 6.184725347s

• [SLOW TEST:10.962 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:21:49.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 27 18:21:50.096: INFO: Waiting up to 5m0s for pod "pod-6e455227-9908-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-qzt9h" to be "success or failure"
Jun 27 18:21:50.105: INFO: Pod "pod-6e455227-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.186086ms
Jun 27 18:21:52.108: INFO: Pod "pod-6e455227-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01256752s
Jun 27 18:21:54.117: INFO: Pod "pod-6e455227-9908-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021002526s
STEP: Saw pod success
Jun 27 18:21:54.117: INFO: Pod "pod-6e455227-9908-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:21:54.121: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-6e455227-9908-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 18:21:54.148: INFO: Waiting for pod pod-6e455227-9908-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:21:54.158: INFO: Pod pod-6e455227-9908-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:21:54.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qzt9h" for this suite.
Jun 27 18:22:00.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:22:00.209: INFO: namespace: e2e-tests-emptydir-qzt9h, resource: bindings, ignored listing per whitelist
Jun 27 18:22:00.314: INFO: namespace e2e-tests-emptydir-qzt9h deletion completed in 6.152807143s

• [SLOW TEST:10.383 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:22:00.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:23:00.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wkwcr" for this suite.
Jun 27 18:23:22.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:23:22.736: INFO: namespace: e2e-tests-container-probe-wkwcr, resource: bindings, ignored listing per whitelist
Jun 27 18:23:22.738: INFO: namespace e2e-tests-container-probe-wkwcr deletion completed in 22.190526374s

• [SLOW TEST:82.424 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
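Annotation (not part of the log): this test creates a pod whose readiness probe always fails and then watches it for an extended window (hence no STEP output between setup and teardown), asserting the pod never becomes Ready and its restart count stays at zero — readiness failures gate traffic but never restart a container; only liveness failures do. An illustrative sketch (image is an assumption):

```python
# Illustrative sketch; field names follow the Kubernetes pod API.

def never_ready_container() -> dict:
    """Container whose readiness probe always fails (exec /bin/false)."""
    return {
        "name": "test-webserver",
        "image": "nginx",  # assumption
        "readinessProbe": {
            "exec": {"command": ["/bin/false"]},
            "initialDelaySeconds": 0,
            "periodSeconds": 1,
        },
    }

def holds(status: dict) -> bool:
    """What the test asserts over its whole observation window."""
    return (not status.get("ready", False)
            and status.get("restartCount", 0) == 0)
```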
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:23:22.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jun 27 18:23:22.865: INFO: Waiting up to 5m0s for pod "client-containers-a599175c-9908-11e9-8fa9-0242ac110005" in namespace "e2e-tests-containers-5nxhr" to be "success or failure"
Jun 27 18:23:22.889: INFO: Pod "client-containers-a599175c-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.682662ms
Jun 27 18:23:24.894: INFO: Pod "client-containers-a599175c-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028990523s
Jun 27 18:23:26.899: INFO: Pod "client-containers-a599175c-9908-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034152894s
STEP: Saw pod success
Jun 27 18:23:26.899: INFO: Pod "client-containers-a599175c-9908-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:23:26.903: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod client-containers-a599175c-9908-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 18:23:26.988: INFO: Waiting for pod client-containers-a599175c-9908-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:23:26.998: INFO: Pod client-containers-a599175c-9908-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:23:26.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-5nxhr" for this suite.
Jun 27 18:23:33.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:23:33.104: INFO: namespace: e2e-tests-containers-5nxhr, resource: bindings, ignored listing per whitelist
Jun 27 18:23:33.148: INFO: namespace e2e-tests-containers-5nxhr deletion completed in 6.14340032s

• [SLOW TEST:10.409 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:23:33.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:23:33.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5nxdn" for this suite.
Jun 27 18:23:57.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:23:57.404: INFO: namespace: e2e-tests-pods-5nxdn, resource: bindings, ignored listing per whitelist
Jun 27 18:23:57.428: INFO: namespace e2e-tests-pods-5nxdn deletion completed in 24.184333542s

• [SLOW TEST:24.280 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
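Annotation (not part of the log): the "Set QOS Class" step works because Kubernetes derives `status.qosClass` from container resources — Guaranteed when every container has limits equal to requests for both cpu and memory, BestEffort when nothing is set anywhere, Burstable otherwise. A simplified sketch of that rule (resources treated as plain string dicts; quantity normalization omitted):

```python
# Simplified sketch of the Kubernetes QoS-class derivation rules.

def qos_class(containers: list) -> str:
    all_guaranteed = True
    any_set = False
    for c in containers:
        req = c.get("resources", {}).get("requests", {})
        lim = c.get("resources", {}).get("limits", {})
        if req or lim:
            any_set = True
        for res in ("cpu", "memory"):
            # Guaranteed needs a limit for cpu+memory in every container,
            # with requests (defaulted to limits when omitted) equal to them.
            if not (res in lim and req.get(res, lim[res]) == lim[res]):
                all_guaranteed = False
    if all_guaranteed and any_set:
        return "Guaranteed"
    return "Burstable" if any_set else "BestEffort"
```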
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:23:57.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jun 27 18:24:02.257: INFO: Successfully updated pod "labelsupdateba4dde3e-9908-11e9-8fa9-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:24:04.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gf6gc" for this suite.
Jun 27 18:24:26.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:24:26.507: INFO: namespace: e2e-tests-projected-gf6gc, resource: bindings, ignored listing per whitelist
Jun 27 18:24:26.508: INFO: namespace e2e-tests-projected-gf6gc deletion completed in 22.201677625s

• [SLOW TEST:29.080 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:24:26.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-x5gbc
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 27 18:24:27.192: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 27 18:24:51.536: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-x5gbc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 18:24:51.536: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 18:24:51.771: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:24:51.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-x5gbc" for this suite.
Jun 27 18:25:15.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:25:15.890: INFO: namespace: e2e-tests-pod-network-test-x5gbc, resource: bindings, ignored listing per whitelist
Jun 27 18:25:15.914: INFO: namespace e2e-tests-pod-network-test-x5gbc deletion completed in 24.138264366s

• [SLOW TEST:49.405 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
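The intra-pod UDP check above probes one pod from another through the test webserver's /dial endpoint. As an illustrative sketch (host and port values taken directly from the ExecWithOptions log line above), the probe URL can be assembled like this:

```python
from urllib.parse import urlencode

# Values as they appear in the ExecWithOptions log line above.
webserver = "10.32.0.5:8080"   # test-container-pod serving /dial
target_host = "10.32.0.4"      # netserver pod to be reached over UDP
params = {"request": "hostName", "protocol": "udp",
          "host": target_host, "port": 8081, "tries": 1}
url = f"http://{webserver}/dial?{urlencode(params)}"
print(url)
```

The test passes when the dial response names the target pod, which is why the log then waits for the endpoints map to drain.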
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:25:15.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jun 27 18:25:16.043: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zd2m9,SelfLink:/api/v1/namespaces/e2e-tests-watch-zd2m9/configmaps/e2e-watch-test-watch-closed,UID:e90b166f-9908-11e9-a678-fa163e0cec1d,ResourceVersion:1371943,Generation:0,CreationTimestamp:2019-06-27 18:25:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 27 18:25:16.043: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zd2m9,SelfLink:/api/v1/namespaces/e2e-tests-watch-zd2m9/configmaps/e2e-watch-test-watch-closed,UID:e90b166f-9908-11e9-a678-fa163e0cec1d,ResourceVersion:1371944,Generation:0,CreationTimestamp:2019-06-27 18:25:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jun 27 18:25:16.094: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zd2m9,SelfLink:/api/v1/namespaces/e2e-tests-watch-zd2m9/configmaps/e2e-watch-test-watch-closed,UID:e90b166f-9908-11e9-a678-fa163e0cec1d,ResourceVersion:1371945,Generation:0,CreationTimestamp:2019-06-27 18:25:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 27 18:25:16.096: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zd2m9,SelfLink:/api/v1/namespaces/e2e-tests-watch-zd2m9/configmaps/e2e-watch-test-watch-closed,UID:e90b166f-9908-11e9-a678-fa163e0cec1d,ResourceVersion:1371946,Generation:0,CreationTimestamp:2019-06-27 18:25:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:25:16.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-zd2m9" for this suite.
Jun 27 18:25:22.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:25:22.140: INFO: namespace: e2e-tests-watch-zd2m9, resource: bindings, ignored listing per whitelist
Jun 27 18:25:22.191: INFO: namespace e2e-tests-watch-zd2m9 deletion completed in 6.08015557s

• [SLOW TEST:6.277 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
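The restart-from-last-resourceVersion behaviour verified above can be sketched with a toy in-memory event stream (purely illustrative; a real client passes the saved resourceVersion back to the watch API, it does not replay a local list):

```python
# Toy model: each event carries the resourceVersion it was observed at,
# using the versions from the configmap dumps above.
events = [
    ("ADDED", 1371943),
    ("MODIFIED", 1371944),   # the first watch closes after this one
    ("MODIFIED", 1371945),   # happens while the watch is closed
    ("DELETED", 1371946),
]

def watch_from(events, last_seen_rv):
    """Deliver only events newer than the last observed resourceVersion."""
    return [(kind, rv) for kind, rv in events if rv > last_seen_rv]

# The first watch saw up to 1371944; the restarted watch picks up the rest.
print(watch_from(events, 1371944))  # [('MODIFIED', 1371945), ('DELETED', 1371946)]
```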
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:25:22.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jun 27 18:25:22.656: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gxfm4,SelfLink:/api/v1/namespaces/e2e-tests-watch-gxfm4/configmaps/e2e-watch-test-label-changed,UID:ecff0074-9908-11e9-a678-fa163e0cec1d,ResourceVersion:1371965,Generation:0,CreationTimestamp:2019-06-27 18:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 27 18:25:22.656: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gxfm4,SelfLink:/api/v1/namespaces/e2e-tests-watch-gxfm4/configmaps/e2e-watch-test-label-changed,UID:ecff0074-9908-11e9-a678-fa163e0cec1d,ResourceVersion:1371966,Generation:0,CreationTimestamp:2019-06-27 18:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jun 27 18:25:22.657: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gxfm4,SelfLink:/api/v1/namespaces/e2e-tests-watch-gxfm4/configmaps/e2e-watch-test-label-changed,UID:ecff0074-9908-11e9-a678-fa163e0cec1d,ResourceVersion:1371967,Generation:0,CreationTimestamp:2019-06-27 18:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jun 27 18:25:32.712: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gxfm4,SelfLink:/api/v1/namespaces/e2e-tests-watch-gxfm4/configmaps/e2e-watch-test-label-changed,UID:ecff0074-9908-11e9-a678-fa163e0cec1d,ResourceVersion:1371981,Generation:0,CreationTimestamp:2019-06-27 18:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 27 18:25:32.712: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gxfm4,SelfLink:/api/v1/namespaces/e2e-tests-watch-gxfm4/configmaps/e2e-watch-test-label-changed,UID:ecff0074-9908-11e9-a678-fa163e0cec1d,ResourceVersion:1371982,Generation:0,CreationTimestamp:2019-06-27 18:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jun 27 18:25:32.713: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gxfm4,SelfLink:/api/v1/namespaces/e2e-tests-watch-gxfm4/configmaps/e2e-watch-test-label-changed,UID:ecff0074-9908-11e9-a678-fa163e0cec1d,ResourceVersion:1371983,Generation:0,CreationTimestamp:2019-06-27 18:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:25:32.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-gxfm4" for this suite.
Jun 27 18:25:38.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:25:38.866: INFO: namespace: e2e-tests-watch-gxfm4, resource: bindings, ignored listing per whitelist
Jun 27 18:25:38.880: INFO: namespace e2e-tests-watch-gxfm4 deletion completed in 6.127222775s

• [SLOW TEST:16.690 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
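The selector behaviour exercised above (a label change out of the selector surfaces as DELETED on the watch, and changing it back surfaces as ADDED) can be sketched as:

```python
# Selector taken from the configmap dumps above.
SELECTOR = {"watch-this-configmap": "label-changed-and-restored"}

def watch_event(old_labels, new_labels):
    """Map a label transition to the event a label-selector watch reports."""
    was = all(old_labels.get(k) == v for k, v in SELECTOR.items())
    now = all(new_labels.get(k) == v for k, v in SELECTOR.items())
    if was and not now:
        return "DELETED"   # object left the watched set
    if not was and now:
        return "ADDED"     # object re-entered the watched set
    return "MODIFIED" if was else None  # None: invisible to this watch

match = {"watch-this-configmap": "label-changed-and-restored"}
other = {"watch-this-configmap": "something-else"}
print(watch_event(match, other))  # DELETED
print(watch_event(other, match))  # ADDED
```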
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:25:38.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jun 27 18:25:45.712: INFO: 8 pods remaining
Jun 27 18:25:45.712: INFO: 5 pods have nil DeletionTimestamp
Jun 27 18:25:45.712: INFO: 
Jun 27 18:25:46.690: INFO: 0 pods remaining
Jun 27 18:25:46.690: INFO: 0 pods have nil DeletionTimestamp
Jun 27 18:25:46.690: INFO: 
STEP: Gathering metrics
W0627 18:25:47.372243       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 27 18:25:47.372: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:25:47.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4l8gh" for this suite.
Jun 27 18:25:53.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:25:53.806: INFO: namespace: e2e-tests-gc-4l8gh, resource: bindings, ignored listing per whitelist
Jun 27 18:25:53.901: INFO: namespace e2e-tests-gc-4l8gh deletion completed in 6.522782427s

• [SLOW TEST:15.021 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
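The deleteOptions referred to in the test name above corresponds, as far as the log shows, to foreground cascading deletion: the apiserver keeps the ReplicationController (with a deletion timestamp set) until the garbage collector has removed every dependent pod, which is the countdown of "pods remaining" seen above. A hedged sketch of such a request body:

```python
import json

# Illustrative DeleteOptions body for foreground cascading deletion.
# With "Foreground", the owner (the rc) is only removed after its
# dependents (the pods) are gone.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "propagationPolicy": "Foreground",
}
print(json.dumps(delete_options))
```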
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:25:53.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-ffbb439e-9908-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 18:25:54.099: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ffbbb4c6-9908-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-r57hr" to be "success or failure"
Jun 27 18:25:54.108: INFO: Pod "pod-projected-configmaps-ffbbb4c6-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31775ms
Jun 27 18:25:56.136: INFO: Pod "pod-projected-configmaps-ffbbb4c6-9908-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036653202s
Jun 27 18:25:58.141: INFO: Pod "pod-projected-configmaps-ffbbb4c6-9908-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041302695s
STEP: Saw pod success
Jun 27 18:25:58.141: INFO: Pod "pod-projected-configmaps-ffbbb4c6-9908-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:25:58.144: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-ffbbb4c6-9908-11e9-8fa9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jun 27 18:25:58.182: INFO: Waiting for pod pod-projected-configmaps-ffbbb4c6-9908-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:25:58.224: INFO: Pod pod-projected-configmaps-ffbbb4c6-9908-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:25:58.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r57hr" for this suite.
Jun 27 18:26:04.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:26:04.447: INFO: namespace: e2e-tests-projected-r57hr, resource: bindings, ignored listing per whitelist
Jun 27 18:26:04.475: INFO: namespace e2e-tests-projected-r57hr deletion completed in 6.243809264s

• [SLOW TEST:10.573 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
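The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above follow a simple poll loop over the pod phase. A toy sketch, with the phase sequence scripted from the three log lines rather than read from a cluster:

```python
import itertools

def wait_for_pod(phases, timeout_polls=150):
    """Poll a pod's phase until it reaches a terminal state."""
    for attempt, phase in enumerate(itertools.islice(phases, timeout_polls), 1):
        if phase in ("Succeeded", "Failed"):
            return phase, attempt
    raise TimeoutError("pod never reached a terminal phase")

# Scripted sequence matching the Pending/Pending/Succeeded lines above.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod(phases))  # ('Succeeded', 3)
```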
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:26:04.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:26:09.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-vsb4r" for this suite.
Jun 27 18:26:31.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:26:31.792: INFO: namespace: e2e-tests-replication-controller-vsb4r, resource: bindings, ignored listing per whitelist
Jun 27 18:26:31.862: INFO: namespace e2e-tests-replication-controller-vsb4r deletion completed in 22.186851585s

• [SLOW TEST:27.386 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
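Adoption in the test above works by the controller taking ownership of orphaned pods whose labels match its selector. A toy sketch of that matching step (field names simplified; a real controller writes an ownerReference with UID and controller flags through the API):

```python
def adopt(rc_selector, pods):
    """Attach an ownerReference to orphan pods matching the selector."""
    for pod in pods:
        matches = all(pod["labels"].get(k) == v for k, v in rc_selector.items())
        if matches and not pod.get("ownerReferences"):
            pod["ownerReferences"] = [{"kind": "ReplicationController",
                                       "name": "pod-adoption"}]
    return pods

# The orphan pod carrying the 'name' label from the STEP lines above.
pods = [{"name": "pod-adoption", "labels": {"name": "pod-adoption"}}]
adopted = adopt({"name": "pod-adoption"}, pods)
print(adopted[0]["ownerReferences"][0]["kind"])  # ReplicationController
```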
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:26:31.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jun 27 18:26:31.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jun 27 18:26:32.242: INFO: stderr: ""
Jun 27 18:26:32.242: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.4:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:26:32.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jbcl6" for this suite.
Jun 27 18:26:38.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:26:38.396: INFO: namespace: e2e-tests-kubectl-jbcl6, resource: bindings, ignored listing per whitelist
Jun 27 18:26:38.403: INFO: namespace e2e-tests-kubectl-jbcl6 deletion completed in 6.157973355s

• [SLOW TEST:6.542 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
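The cluster-info stdout above is captured with ANSI colour codes (`\x1b[0;32m` and friends). Stripping them recovers the plain text the test actually validates; an illustrative sketch over the first line of that output:

```python
import re

# First line of the captured stdout above, colour codes included.
stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.24.4.4:6443\x1b[0m\n")

# CSI sequences: ESC, '[', parameter bytes, then a final byte in @-~.
ansi = re.compile(r"\x1b\[[0-9;]*[ -/]*[@-~]")
plain = ansi.sub("", stdout)
print(plain.strip())  # Kubernetes master is running at https://172.24.4.4:6443
```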
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:26:38.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-1a34cf85-9909-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 18:26:38.572: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1a3f87ad-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-69k65" to be "success or failure"
Jun 27 18:26:38.594: INFO: Pod "pod-projected-secrets-1a3f87ad-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.299569ms
Jun 27 18:26:40.599: INFO: Pod "pod-projected-secrets-1a3f87ad-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026980216s
Jun 27 18:26:42.604: INFO: Pod "pod-projected-secrets-1a3f87ad-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032396228s
STEP: Saw pod success
Jun 27 18:26:42.604: INFO: Pod "pod-projected-secrets-1a3f87ad-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:26:42.609: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-1a3f87ad-9909-11e9-8fa9-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jun 27 18:26:42.641: INFO: Waiting for pod pod-projected-secrets-1a3f87ad-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:26:42.648: INFO: Pod pod-projected-secrets-1a3f87ad-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:26:42.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-69k65" for this suite.
Jun 27 18:26:48.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:26:48.914: INFO: namespace: e2e-tests-projected-69k65, resource: bindings, ignored listing per whitelist
Jun 27 18:26:48.925: INFO: namespace e2e-tests-projected-69k65 deletion completed in 6.268733525s

• [SLOW TEST:10.521 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
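The non-root/defaultMode/fsGroup variant above is about ownership and permission bits on the projected secret file. The log does not show the mode the test uses, so 0440 here is an assumption for illustration; the API expresses it as a decimal integer:

```python
# defaultMode appears in the API as a decimal integer; 288 == 0o440
# (owner and group readable, which is what fsGroup membership relies on).
default_mode = 0o440
print(oct(default_mode))          # 0o440
print(default_mode & 0o400 != 0)  # owner can read: True
print(default_mode & 0o040 != 0)  # group (fsGroup) can read: True
```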
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:26:48.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-txxnt/configmap-test-20825f17-9909-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 18:26:49.098: INFO: Waiting up to 5m0s for pod "pod-configmaps-20835d9b-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-configmap-txxnt" to be "success or failure"
Jun 27 18:26:49.105: INFO: Pod "pod-configmaps-20835d9b-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512128ms
Jun 27 18:26:51.111: INFO: Pod "pod-configmaps-20835d9b-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012595602s
Jun 27 18:26:53.117: INFO: Pod "pod-configmaps-20835d9b-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018922317s
STEP: Saw pod success
Jun 27 18:26:53.117: INFO: Pod "pod-configmaps-20835d9b-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:26:53.121: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-20835d9b-9909-11e9-8fa9-0242ac110005 container env-test: 
STEP: delete the pod
Jun 27 18:26:53.167: INFO: Waiting for pod pod-configmaps-20835d9b-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:26:53.174: INFO: Pod pod-configmaps-20835d9b-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:26:53.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-txxnt" for this suite.
Jun 27 18:26:59.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:26:59.396: INFO: namespace: e2e-tests-configmap-txxnt, resource: bindings, ignored listing per whitelist
Jun 27 18:26:59.431: INFO: namespace e2e-tests-configmap-txxnt deletion completed in 6.253322426s

• [SLOW TEST:10.506 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
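The env-test container above checks that ConfigMap keys surface as environment variables inside the pod. A toy version of that injection (key and variable names are hypothetical; the real mapping is declared via env.valueFrom.configMapKeyRef in the pod spec):

```python
import os

# Hypothetical ConfigMap data, stood in for the configmap created above.
configmap_data = {"data-1": "value-1"}

# The kubelet resolves the keyRef and exports it into the container env.
env = {"CONFIG_DATA_1": configmap_data["data-1"]}
os.environ.update(env)
print(os.environ["CONFIG_DATA_1"])  # value-1
```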
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:26:59.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-26c12597-9909-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 18:26:59.560: INFO: Waiting up to 5m0s for pod "pod-secrets-26c1d0be-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-secrets-949zs" to be "success or failure"
Jun 27 18:26:59.578: INFO: Pod "pod-secrets-26c1d0be-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.880028ms
Jun 27 18:27:01.637: INFO: Pod "pod-secrets-26c1d0be-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077511573s
Jun 27 18:27:03.642: INFO: Pod "pod-secrets-26c1d0be-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081660204s
STEP: Saw pod success
Jun 27 18:27:03.642: INFO: Pod "pod-secrets-26c1d0be-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:27:03.645: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-26c1d0be-9909-11e9-8fa9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jun 27 18:27:03.744: INFO: Waiting for pod pod-secrets-26c1d0be-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:27:03.748: INFO: Pod pod-secrets-26c1d0be-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:27:03.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-949zs" for this suite.
Jun 27 18:27:09.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:27:09.792: INFO: namespace: e2e-tests-secrets-949zs, resource: bindings, ignored listing per whitelist
Jun 27 18:27:09.924: INFO: namespace e2e-tests-secrets-949zs deletion completed in 6.167135266s

• [SLOW TEST:10.493 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:27:09.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 27 18:27:10.105: INFO: Number of nodes with available pods: 0
Jun 27 18:27:10.105: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:27:11.114: INFO: Number of nodes with available pods: 0
Jun 27 18:27:11.114: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:27:12.111: INFO: Number of nodes with available pods: 0
Jun 27 18:27:12.111: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:27:13.113: INFO: Number of nodes with available pods: 1
Jun 27 18:27:13.113: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jun 27 18:27:13.154: INFO: Number of nodes with available pods: 0
Jun 27 18:27:13.154: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:27:14.161: INFO: Number of nodes with available pods: 0
Jun 27 18:27:14.161: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:27:15.164: INFO: Number of nodes with available pods: 0
Jun 27 18:27:15.164: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:27:16.165: INFO: Number of nodes with available pods: 0
Jun 27 18:27:16.165: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:27:17.162: INFO: Number of nodes with available pods: 0
Jun 27 18:27:17.162: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:27:18.162: INFO: Number of nodes with available pods: 0
Jun 27 18:27:18.162: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:27:19.175: INFO: Number of nodes with available pods: 0
Jun 27 18:27:19.175: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:27:20.166: INFO: Number of nodes with available pods: 1
Jun 27 18:27:20.166: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-ds8cm, will wait for the garbage collector to delete the pods
Jun 27 18:27:20.236: INFO: Deleting DaemonSet.extensions daemon-set took: 12.838103ms
Jun 27 18:27:20.337: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.255637ms
Jun 27 18:27:25.843: INFO: Number of nodes with available pods: 0
Jun 27 18:27:25.843: INFO: Number of running nodes: 0, number of available pods: 0
Jun 27 18:27:25.847: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-ds8cm/daemonsets","resourceVersion":"1372442"},"items":null}

Jun 27 18:27:25.851: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-ds8cm/pods","resourceVersion":"1372442"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:27:25.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-ds8cm" for this suite.
Jun 27 18:27:31.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:27:31.940: INFO: namespace: e2e-tests-daemonsets-ds8cm, resource: bindings, ignored listing per whitelist
Jun 27 18:27:32.005: INFO: namespace e2e-tests-daemonsets-ds8cm deletion completed in 6.138151186s

• [SLOW TEST:22.081 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:27:32.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 18:27:32.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a25833e-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-fsjqr" to be "success or failure"
Jun 27 18:27:32.110: INFO: Pod "downwardapi-volume-3a25833e-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.78448ms
Jun 27 18:27:34.132: INFO: Pod "downwardapi-volume-3a25833e-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043234601s
Jun 27 18:27:36.136: INFO: Pod "downwardapi-volume-3a25833e-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047267893s
STEP: Saw pod success
Jun 27 18:27:36.136: INFO: Pod "downwardapi-volume-3a25833e-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:27:36.139: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-3a25833e-9909-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 18:27:36.164: INFO: Waiting for pod downwardapi-volume-3a25833e-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:27:36.174: INFO: Pod downwardapi-volume-3a25833e-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:27:36.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fsjqr" for this suite.
Jun 27 18:27:42.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:27:42.214: INFO: namespace: e2e-tests-projected-fsjqr, resource: bindings, ignored listing per whitelist
Jun 27 18:27:42.305: INFO: namespace e2e-tests-projected-fsjqr deletion completed in 6.125512157s

• [SLOW TEST:10.300 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:27:42.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jun 27 18:27:42.406: INFO: Waiting up to 5m0s for pod "downward-api-404c0ea4-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-wtjhr" to be "success or failure"
Jun 27 18:27:42.414: INFO: Pod "downward-api-404c0ea4-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.202213ms
Jun 27 18:27:44.417: INFO: Pod "downward-api-404c0ea4-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010410837s
Jun 27 18:27:46.422: INFO: Pod "downward-api-404c0ea4-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01579916s
STEP: Saw pod success
Jun 27 18:27:46.422: INFO: Pod "downward-api-404c0ea4-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:27:46.426: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downward-api-404c0ea4-9909-11e9-8fa9-0242ac110005 container dapi-container: 
STEP: delete the pod
Jun 27 18:27:46.496: INFO: Waiting for pod downward-api-404c0ea4-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:27:46.504: INFO: Pod downward-api-404c0ea4-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:27:46.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wtjhr" for this suite.
Jun 27 18:27:52.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:27:52.606: INFO: namespace: e2e-tests-downward-api-wtjhr, resource: bindings, ignored listing per whitelist
Jun 27 18:27:52.742: INFO: namespace e2e-tests-downward-api-wtjhr deletion completed in 6.22795829s

• [SLOW TEST:10.437 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:27:52.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-c59s4
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-c59s4
STEP: Deleting pre-stop pod
Jun 27 18:28:06.052: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:28:06.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-c59s4" for this suite.
Jun 27 18:28:44.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:28:44.268: INFO: namespace: e2e-tests-prestop-c59s4, resource: bindings, ignored listing per whitelist
Jun 27 18:28:44.294: INFO: namespace e2e-tests-prestop-c59s4 deletion completed in 38.161020974s

• [SLOW TEST:51.551 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:28:44.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 18:28:44.483: INFO: Waiting up to 5m0s for pod "downwardapi-volume-654c5260-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-gsmxq" to be "success or failure"
Jun 27 18:28:44.497: INFO: Pod "downwardapi-volume-654c5260-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.127358ms
Jun 27 18:28:46.502: INFO: Pod "downwardapi-volume-654c5260-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019342241s
Jun 27 18:28:48.507: INFO: Pod "downwardapi-volume-654c5260-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02340607s
STEP: Saw pod success
Jun 27 18:28:48.507: INFO: Pod "downwardapi-volume-654c5260-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:28:48.512: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-654c5260-9909-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 18:28:48.538: INFO: Waiting for pod downwardapi-volume-654c5260-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:28:48.547: INFO: Pod downwardapi-volume-654c5260-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:28:48.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gsmxq" for this suite.
Jun 27 18:28:54.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:28:54.779: INFO: namespace: e2e-tests-projected-gsmxq, resource: bindings, ignored listing per whitelist
Jun 27 18:28:54.790: INFO: namespace e2e-tests-projected-gsmxq deletion completed in 6.239151761s

• [SLOW TEST:10.496 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:28:54.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jun 27 18:28:54.956: INFO: Waiting up to 5m0s for pod "downward-api-6b8a140c-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-6g9v4" to be "success or failure"
Jun 27 18:28:54.969: INFO: Pod "downward-api-6b8a140c-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.555832ms
Jun 27 18:28:57.061: INFO: Pod "downward-api-6b8a140c-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105643845s
Jun 27 18:28:59.068: INFO: Pod "downward-api-6b8a140c-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111970618s
STEP: Saw pod success
Jun 27 18:28:59.068: INFO: Pod "downward-api-6b8a140c-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:28:59.072: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downward-api-6b8a140c-9909-11e9-8fa9-0242ac110005 container dapi-container: 
STEP: delete the pod
Jun 27 18:28:59.109: INFO: Waiting for pod downward-api-6b8a140c-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:28:59.114: INFO: Pod downward-api-6b8a140c-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:28:59.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6g9v4" for this suite.
Jun 27 18:29:05.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:29:05.290: INFO: namespace: e2e-tests-downward-api-6g9v4, resource: bindings, ignored listing per whitelist
Jun 27 18:29:05.412: INFO: namespace e2e-tests-downward-api-6g9v4 deletion completed in 6.295279622s

• [SLOW TEST:10.622 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:29:05.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jun 27 18:29:05.765: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-z2hct,SelfLink:/api/v1/namespaces/e2e-tests-watch-z2hct/configmaps/e2e-watch-test-resource-version,UID:71e8473a-9909-11e9-a678-fa163e0cec1d,ResourceVersion:1372724,Generation:0,CreationTimestamp:2019-06-27 18:29:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 27 18:29:05.766: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-z2hct,SelfLink:/api/v1/namespaces/e2e-tests-watch-z2hct/configmaps/e2e-watch-test-resource-version,UID:71e8473a-9909-11e9-a678-fa163e0cec1d,ResourceVersion:1372725,Generation:0,CreationTimestamp:2019-06-27 18:29:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:29:05.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-z2hct" for this suite.
Jun 27 18:29:11.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:29:11.855: INFO: namespace: e2e-tests-watch-z2hct, resource: bindings, ignored listing per whitelist
Jun 27 18:29:11.896: INFO: namespace e2e-tests-watch-z2hct deletion completed in 6.125737561s

• [SLOW TEST:6.483 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:29:11.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0627 18:29:13.065004       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 27 18:29:13.065: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:29:13.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fzwtv" for this suite.
Jun 27 18:29:19.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:29:19.205: INFO: namespace: e2e-tests-gc-fzwtv, resource: bindings, ignored listing per whitelist
Jun 27 18:29:19.212: INFO: namespace e2e-tests-gc-fzwtv deletion completed in 6.1376753s

• [SLOW TEST:7.316 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:29:19.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 27 18:29:19.312: INFO: Waiting up to 5m0s for pod "pod-7a0e8f2d-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-wqgbd" to be "success or failure"
Jun 27 18:29:19.319: INFO: Pod "pod-7a0e8f2d-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.810834ms
Jun 27 18:29:21.324: INFO: Pod "pod-7a0e8f2d-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011593758s
Jun 27 18:29:23.329: INFO: Pod "pod-7a0e8f2d-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01710381s
STEP: Saw pod success
Jun 27 18:29:23.329: INFO: Pod "pod-7a0e8f2d-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:29:23.334: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-7a0e8f2d-9909-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 18:29:23.377: INFO: Waiting for pod pod-7a0e8f2d-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:29:23.386: INFO: Pod pod-7a0e8f2d-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:29:23.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wqgbd" for this suite.
Jun 27 18:29:29.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:29:29.509: INFO: namespace: e2e-tests-emptydir-wqgbd, resource: bindings, ignored listing per whitelist
Jun 27 18:29:29.578: INFO: namespace e2e-tests-emptydir-wqgbd deletion completed in 6.182489887s

• [SLOW TEST:10.366 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
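The emptyDir block above shows the framework's standard wait pattern: "Waiting up to 5m0s for pod ... to be 'success or failure'", polling roughly every two seconds until the phase reaches Succeeded (or Failed). A minimal Python sketch of that fixed-interval poll loop follows; it is an illustration of the pattern visible in the log, not the e2e framework's actual Go `wait` helper, and the `get_phase`, `now`, and `sleep` parameters are stand-ins introduced here for testability.

```python
import time


def wait_for_phase(get_phase, timeout=300.0, interval=2.0,
                   now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase or the
    timeout expires.

    Mirrors the "Waiting up to 5m0s for pod ... to be 'success or
    failure'" loop in the log: check the phase, log/return on
    Succeeded or Failed, otherwise sleep a fixed interval and retry.
    Simplified sketch only -- not the real e2e framework code.
    """
    deadline = now() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if now() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

In the log the pod stays Pending for two polls (6.8ms, then 2.01s elapsed) before the third poll observes Succeeded at 4.02s, which is exactly the shape this loop produces with a 2s interval.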
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:29:29.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-80408df1-9909-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 18:29:29.721: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-80431006-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-dqfn5" to be "success or failure"
Jun 27 18:29:29.735: INFO: Pod "pod-projected-secrets-80431006-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.172342ms
Jun 27 18:29:31.741: INFO: Pod "pod-projected-secrets-80431006-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020432473s
Jun 27 18:29:33.744: INFO: Pod "pod-projected-secrets-80431006-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023575516s
STEP: Saw pod success
Jun 27 18:29:33.744: INFO: Pod "pod-projected-secrets-80431006-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:29:33.746: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-80431006-9909-11e9-8fa9-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jun 27 18:29:33.785: INFO: Waiting for pod pod-projected-secrets-80431006-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:29:33.816: INFO: Pod pod-projected-secrets-80431006-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:29:33.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dqfn5" for this suite.
Jun 27 18:29:39.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:29:40.097: INFO: namespace: e2e-tests-projected-dqfn5, resource: bindings, ignored listing per whitelist
Jun 27 18:29:40.120: INFO: namespace e2e-tests-projected-dqfn5 deletion completed in 6.29967372s

• [SLOW TEST:10.541 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
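The projected-secret test above mounts secret keys as files with a `defaultMode`, and the test container verifies the resulting file permissions. The exact mode used is not shown in this log, so the values below are only illustrative; this sketch just renders a POSIX mode the way `ls -l` (and hence the test container's output) would display it.

```python
def mode_to_string(mode: int) -> str:
    """Render a POSIX file mode (e.g. 0o644) as an `ls -l`-style
    string such as "-rw-r--r--".

    Illustrative helper for the defaultMode checks in the projected
    secret tests; the modes used by the actual test are not visible
    in this log, so 0o644 etc. are assumed example values.
    """
    bits = "rwxrwxrwx"
    perms = "".join(
        b if mode & (1 << (8 - i)) else "-"
        for i, b in enumerate(bits)
    )
    return "-" + perms
```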
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:29:40.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-86930a92-9909-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 18:29:40.348: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8697c83a-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-9sfcf" to be "success or failure"
Jun 27 18:29:40.357: INFO: Pod "pod-projected-secrets-8697c83a-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.638494ms
Jun 27 18:29:42.410: INFO: Pod "pod-projected-secrets-8697c83a-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062245905s
Jun 27 18:29:44.414: INFO: Pod "pod-projected-secrets-8697c83a-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06618901s
STEP: Saw pod success
Jun 27 18:29:44.414: INFO: Pod "pod-projected-secrets-8697c83a-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:29:44.416: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-8697c83a-9909-11e9-8fa9-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jun 27 18:29:44.522: INFO: Waiting for pod pod-projected-secrets-8697c83a-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:29:44.530: INFO: Pod pod-projected-secrets-8697c83a-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:29:44.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9sfcf" for this suite.
Jun 27 18:29:50.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:29:50.602: INFO: namespace: e2e-tests-projected-9sfcf, resource: bindings, ignored listing per whitelist
Jun 27 18:29:50.684: INFO: namespace e2e-tests-projected-9sfcf deletion completed in 6.149396396s

• [SLOW TEST:10.564 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:29:50.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 18:29:50.864: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8cdd1cd3-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-l4sfp" to be "success or failure"
Jun 27 18:29:50.898: INFO: Pod "downwardapi-volume-8cdd1cd3-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.644379ms
Jun 27 18:29:52.930: INFO: Pod "downwardapi-volume-8cdd1cd3-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065217988s
Jun 27 18:29:54.936: INFO: Pod "downwardapi-volume-8cdd1cd3-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071651796s
STEP: Saw pod success
Jun 27 18:29:54.936: INFO: Pod "downwardapi-volume-8cdd1cd3-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:29:54.940: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-8cdd1cd3-9909-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 18:29:54.981: INFO: Waiting for pod downwardapi-volume-8cdd1cd3-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:29:54.986: INFO: Pod downwardapi-volume-8cdd1cd3-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:29:54.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l4sfp" for this suite.
Jun 27 18:30:01.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:30:01.039: INFO: namespace: e2e-tests-projected-l4sfp, resource: bindings, ignored listing per whitelist
Jun 27 18:30:01.090: INFO: namespace e2e-tests-projected-l4sfp deletion completed in 6.09952555s

• [SLOW TEST:10.406 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
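The downward API test above mounts a volume file exposing the container's memory limit via a `resourceFieldRef`, optionally scaled by a divisor (e.g. `1Mi`). To my understanding Kubernetes rounds the division up when writing the value; the sketch below assumes that behavior and byte-valued quantities, and the function name is a hypothetical one chosen here, not an API of the test framework.

```python
import math


def memory_limit_field(limit_bytes: int, divisor_bytes: int = 1) -> str:
    """Value a downwardAPI volume file would contain for
    resourceFieldRef: limits.memory with the given divisor.

    Assumes Kubernetes' round-up division of the quantity by the
    divisor; sketch of the observable behavior the test checks, not
    the real quantity-conversion code.
    """
    return str(math.ceil(limit_bytes / divisor_bytes))
```

For example, a 64MiB limit with a `1Mi` divisor would surface as the string `64` in the mounted file under these assumptions.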
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:30:01.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:30:01.268: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jun 27 18:30:06.274: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 27 18:30:06.274: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jun 27 18:30:08.280: INFO: Creating deployment "test-rollover-deployment"
Jun 27 18:30:08.297: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jun 27 18:30:10.352: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jun 27 18:30:10.361: INFO: Ensure that both replica sets have 1 created replica
Jun 27 18:30:10.371: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jun 27 18:30:10.380: INFO: Updating deployment test-rollover-deployment
Jun 27 18:30:10.380: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jun 27 18:30:12.448: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jun 27 18:30:12.571: INFO: Make sure deployment "test-rollover-deployment" is complete
Jun 27 18:30:12.587: INFO: all replica sets need to contain the pod-template-hash label
Jun 27 18:30:12.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257010, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 27 18:30:14.596: INFO: all replica sets need to contain the pod-template-hash label
Jun 27 18:30:14.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257013, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 27 18:30:16.596: INFO: all replica sets need to contain the pod-template-hash label
Jun 27 18:30:16.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257013, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 27 18:30:18.597: INFO: all replica sets need to contain the pod-template-hash label
Jun 27 18:30:18.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257013, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 27 18:30:20.597: INFO: all replica sets need to contain the pod-template-hash label
Jun 27 18:30:20.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257013, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 27 18:30:22.599: INFO: all replica sets need to contain the pod-template-hash label
Jun 27 18:30:22.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257013, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697257008, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 27 18:30:24.598: INFO: 
Jun 27 18:30:24.598: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jun 27 18:30:24.608: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-54n96,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-54n96/deployments/test-rollover-deployment,UID:97423f6f-9909-11e9-a678-fa163e0cec1d,ResourceVersion:1373032,Generation:2,CreationTimestamp:2019-06-27 18:30:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-06-27 18:30:08 +0000 UTC 2019-06-27 18:30:08 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-06-27 18:30:23 +0000 UTC 2019-06-27 18:30:08 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-6b7f9d6597" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jun 27 18:30:24.612: INFO: New ReplicaSet "test-rollover-deployment-6b7f9d6597" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6b7f9d6597,GenerateName:,Namespace:e2e-tests-deployment-54n96,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-54n96/replicasets/test-rollover-deployment-6b7f9d6597,UID:98827bfe-9909-11e9-a678-fa163e0cec1d,ResourceVersion:1373023,Generation:2,CreationTimestamp:2019-06-27 18:30:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 97423f6f-9909-11e9-a678-fa163e0cec1d 0xc00225bec7 0xc00225bec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jun 27 18:30:24.612: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jun 27 18:30:24.612: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-54n96,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-54n96/replicasets/test-rollover-controller,UID:931217ce-9909-11e9-a678-fa163e0cec1d,ResourceVersion:1373031,Generation:2,CreationTimestamp:2019-06-27 18:30:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 97423f6f-9909-11e9-a678-fa163e0cec1d 0xc00225bcb7 0xc00225bcb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jun 27 18:30:24.612: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6586df867b,GenerateName:,Namespace:e2e-tests-deployment-54n96,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-54n96/replicasets/test-rollover-deployment-6586df867b,UID:9745b714-9909-11e9-a678-fa163e0cec1d,ResourceVersion:1372999,Generation:2,CreationTimestamp:2019-06-27 18:30:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 97423f6f-9909-11e9-a678-fa163e0cec1d 0xc00225bd77 0xc00225bd78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jun 27 18:30:24.616: INFO: Pod "test-rollover-deployment-6b7f9d6597-cgwh2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6b7f9d6597-cgwh2,GenerateName:test-rollover-deployment-6b7f9d6597-,Namespace:e2e-tests-deployment-54n96,SelfLink:/api/v1/namespaces/e2e-tests-deployment-54n96/pods/test-rollover-deployment-6b7f9d6597-cgwh2,UID:98a16f10-9909-11e9-a678-fa163e0cec1d,ResourceVersion:1373008,Generation:0,CreationTimestamp:2019-06-27 18:30:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-6b7f9d6597 98827bfe-9909-11e9-a678-fa163e0cec1d 0xc001cc8de7 0xc001cc8de8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4vf4x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4vf4x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-4vf4x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001cc8e50} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001cc8e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:30:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:30:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:30:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:30:10 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.5,StartTime:2019-06-27 18:30:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-06-27 18:30:12 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://36b00699288ed5b060bc72cf2d5d92c3b16d4aabfbd92bfb5dadde7c7a9265eb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:30:24.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-54n96" for this suite.
Jun 27 18:30:32.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:30:32.688: INFO: namespace: e2e-tests-deployment-54n96, resource: bindings, ignored listing per whitelist
Jun 27 18:30:32.801: INFO: namespace e2e-tests-deployment-54n96 deletion completed in 8.179593811s

• [SLOW TEST:31.710 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
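The rollover test above replaces a Deployment's pod template mid-rollout and expects the controller to roll the new template out without dropping availability. A minimal sketch of the kind of Deployment involved — the `name: rollover-pod` label and redis image are taken from the pod dump in the log, but the replica count and strategy fields here are assumptions, not the framework's exact object:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment   # name mirrors the log; other spec details are illustrative
spec:
  replicas: 1
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0            # keep old pods serving while the new template rolls out
      maxSurge: 1
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```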
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:30:32.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:30:36.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-vgkqz" for this suite.
Jun 27 18:31:29.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:31:29.065: INFO: namespace: e2e-tests-kubelet-test-vgkqz, resource: bindings, ignored listing per whitelist
Jun 27 18:31:29.122: INFO: namespace e2e-tests-kubelet-test-vgkqz deletion completed in 52.149753825s

• [SLOW TEST:56.321 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
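The hostAliases test above verifies that entries declared in the pod spec are written into the container's `/etc/hosts`. A sketch of such a pod — the pod name, IP, and hostnames are illustrative, not the test's actual values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases       # illustrative name
spec:
  restartPolicy: Never
  hostAliases:                     # kubelet appends these to the container's /etc/hosts
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"] # print the file so the entries are visible in the logs
```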
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:31:29.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jun 27 18:31:29.231: INFO: Waiting up to 5m0s for pod "var-expansion-c77f343c-9909-11e9-8fa9-0242ac110005" in namespace "e2e-tests-var-expansion-75dzb" to be "success or failure"
Jun 27 18:31:29.245: INFO: Pod "var-expansion-c77f343c-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.004147ms
Jun 27 18:31:31.251: INFO: Pod "var-expansion-c77f343c-9909-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019912455s
Jun 27 18:31:33.253: INFO: Pod "var-expansion-c77f343c-9909-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022652606s
STEP: Saw pod success
Jun 27 18:31:33.253: INFO: Pod "var-expansion-c77f343c-9909-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:31:33.255: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod var-expansion-c77f343c-9909-11e9-8fa9-0242ac110005 container dapi-container: 
STEP: delete the pod
Jun 27 18:31:33.399: INFO: Waiting for pod var-expansion-c77f343c-9909-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:31:33.717: INFO: Pod var-expansion-c77f343c-9909-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:31:33.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-75dzb" for this suite.
Jun 27 18:31:39.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:31:39.863: INFO: namespace: e2e-tests-var-expansion-75dzb, resource: bindings, ignored listing per whitelist
Jun 27 18:31:39.885: INFO: namespace e2e-tests-var-expansion-75dzb deletion completed in 6.161485401s

• [SLOW TEST:10.763 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
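The variable-expansion test creates a pod whose command references an environment variable with the `$(VAR)` syntax, which the kubelet expands before starting the container. A sketch along those lines — the container name `dapi-container` appears in the log, but the variable name and value here are made up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container           # container name taken from the log above
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # $(MESSAGE) is substituted by Kubernetes, not by the shell
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
```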
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:31:39.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-cs5kk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cs5kk to expose endpoints map[]
Jun 27 18:31:40.201: INFO: Get endpoints failed (19.557597ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jun 27 18:31:41.205: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cs5kk exposes endpoints map[] (1.023534823s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-cs5kk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cs5kk to expose endpoints map[pod1:[100]]
Jun 27 18:31:44.320: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cs5kk exposes endpoints map[pod1:[100]] (3.106542685s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-cs5kk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cs5kk to expose endpoints map[pod1:[100] pod2:[101]]
Jun 27 18:31:47.403: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cs5kk exposes endpoints map[pod1:[100] pod2:[101]] (3.079798601s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-cs5kk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cs5kk to expose endpoints map[pod2:[101]]
Jun 27 18:31:47.448: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cs5kk exposes endpoints map[pod2:[101]] (34.763962ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-cs5kk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cs5kk to expose endpoints map[]
Jun 27 18:31:48.468: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cs5kk exposes endpoints map[] (1.009908001s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:31:48.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-cs5kk" for this suite.
Jun 27 18:32:12.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:32:12.713: INFO: namespace: e2e-tests-services-cs5kk, resource: bindings, ignored listing per whitelist
Jun 27 18:32:12.715: INFO: namespace e2e-tests-services-cs5kk deletion completed in 24.119994496s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:32.829 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
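The Services test above checks that a multi-port Service tracks endpoints as pods come and go; the log shows pod1 backing container port 100 and pod2 backing port 101. A sketch of a matching Service — the selector and port names are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test        # service name from the log
spec:
  selector:
    app: multi-endpoint            # illustrative selector; must match the backing pods' labels
  ports:
  - name: portname1
    port: 80
    targetPort: 100                # endpoints on container port 100 (pod1 in the log)
  - name: portname2
    port: 81
    targetPort: 101                # endpoints on container port 101 (pod2 in the log)
```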
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:32:12.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-e17df711-9909-11e9-8fa9-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-e17df711-9909-11e9-8fa9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:33:25.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6hhkn" for this suite.
Jun 27 18:33:49.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:33:49.642: INFO: namespace: e2e-tests-projected-6hhkn, resource: bindings, ignored listing per whitelist
Jun 27 18:33:49.694: INFO: namespace e2e-tests-projected-6hhkn deletion completed in 24.20536962s

• [SLOW TEST:96.979 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
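The projected-configMap test updates a ConfigMap and waits for the change to appear inside an already-mounted volume (the kubelet syncs projected volumes periodically, which is why the test spends over a minute "waiting to observe update in volume"). A sketch of the pod shape involved — the real ConfigMap name carries a generated UID suffix, so the name below is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-watcher       # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # keep printing the mounted file so the update is observable in the logs
    command: ["/bin/sh", "-c", "while true; do cat /etc/projected/data; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd   # placeholder; the test's name has a UID suffix
```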
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:33:49.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jun 27 18:33:50.097: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:33:55.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-zv5nc" for this suite.
Jun 27 18:34:01.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:34:01.710: INFO: namespace: e2e-tests-init-container-zv5nc, resource: bindings, ignored listing per whitelist
Jun 27 18:34:01.892: INFO: namespace e2e-tests-init-container-zv5nc deletion completed in 6.318209307s

• [SLOW TEST:12.197 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
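The init-container test relies on the guarantee that `initContainers` run to completion, in order, before any app container starts — even on a `restartPolicy: Never` pod. A minimal sketch (names and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-container-demo        # illustrative name
spec:
  restartPolicy: Never
  initContainers:                  # each must exit 0 before the next starts
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: main                     # starts only after both init containers succeed
    image: busybox
    command: ["echo", "init containers finished"]
```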
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:34:01.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jun 27 18:34:03.147: INFO: Pod name wrapped-volume-race-23341953-990a-11e9-8fa9-0242ac110005: Found 0 pods out of 5
Jun 27 18:34:08.155: INFO: Pod name wrapped-volume-race-23341953-990a-11e9-8fa9-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-23341953-990a-11e9-8fa9-0242ac110005 in namespace e2e-tests-emptydir-wrapper-blmlc, will wait for the garbage collector to delete the pods
Jun 27 18:36:24.249: INFO: Deleting ReplicationController wrapped-volume-race-23341953-990a-11e9-8fa9-0242ac110005 took: 7.39759ms
Jun 27 18:36:24.850: INFO: Terminating ReplicationController wrapped-volume-race-23341953-990a-11e9-8fa9-0242ac110005 pods took: 600.306014ms
STEP: Creating RC which spawns configmap-volume pods
Jun 27 18:37:06.657: INFO: Pod name wrapped-volume-race-908eaacf-990a-11e9-8fa9-0242ac110005: Found 0 pods out of 5
Jun 27 18:37:11.663: INFO: Pod name wrapped-volume-race-908eaacf-990a-11e9-8fa9-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-908eaacf-990a-11e9-8fa9-0242ac110005 in namespace e2e-tests-emptydir-wrapper-blmlc, will wait for the garbage collector to delete the pods
Jun 27 18:39:41.790: INFO: Deleting ReplicationController wrapped-volume-race-908eaacf-990a-11e9-8fa9-0242ac110005 took: 12.800285ms
Jun 27 18:39:42.090: INFO: Terminating ReplicationController wrapped-volume-race-908eaacf-990a-11e9-8fa9-0242ac110005 pods took: 300.25029ms
STEP: Creating RC which spawns configmap-volume pods
Jun 27 18:40:25.968: INFO: Pod name wrapped-volume-race-075f9fe4-990b-11e9-8fa9-0242ac110005: Found 0 pods out of 5
Jun 27 18:40:30.980: INFO: Pod name wrapped-volume-race-075f9fe4-990b-11e9-8fa9-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-075f9fe4-990b-11e9-8fa9-0242ac110005 in namespace e2e-tests-emptydir-wrapper-blmlc, will wait for the garbage collector to delete the pods
Jun 27 18:42:45.069: INFO: Deleting ReplicationController wrapped-volume-race-075f9fe4-990b-11e9-8fa9-0242ac110005 took: 10.295642ms
Jun 27 18:42:45.369: INFO: Terminating ReplicationController wrapped-volume-race-075f9fe4-990b-11e9-8fa9-0242ac110005 pods took: 300.206766ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:43:26.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-blmlc" for this suite.
Jun 27 18:43:34.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:43:34.824: INFO: namespace: e2e-tests-emptydir-wrapper-blmlc, resource: bindings, ignored listing per whitelist
Jun 27 18:43:34.873: INFO: namespace e2e-tests-emptydir-wrapper-blmlc deletion completed in 8.095754213s

• [SLOW TEST:572.981 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:43:34.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 18:43:35.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-781750c6-990b-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-xk5fc" to be "success or failure"
Jun 27 18:43:35.024: INFO: Pod "downwardapi-volume-781750c6-990b-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.957936ms
Jun 27 18:43:37.029: INFO: Pod "downwardapi-volume-781750c6-990b-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022136912s
Jun 27 18:43:39.034: INFO: Pod "downwardapi-volume-781750c6-990b-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027080574s
STEP: Saw pod success
Jun 27 18:43:39.034: INFO: Pod "downwardapi-volume-781750c6-990b-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:43:39.037: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-781750c6-990b-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 18:43:39.092: INFO: Waiting for pod downwardapi-volume-781750c6-990b-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:43:39.099: INFO: Pod downwardapi-volume-781750c6-990b-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:43:39.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xk5fc" for this suite.
Jun 27 18:43:45.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:43:45.245: INFO: namespace: e2e-tests-downward-api-xk5fc, resource: bindings, ignored listing per whitelist
Jun 27 18:43:45.315: INFO: namespace e2e-tests-downward-api-xk5fc deletion completed in 6.211072675s

• [SLOW TEST:10.442 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
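The Downward API volume test exposes the container's own CPU request as a file inside the pod. A sketch of the mechanism — the container name `client-container` matches the log, while the request value, file path, and divisor are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # illustrative name
spec:
  containers:
  - name: client-container         # container name from the log above
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m              # report the request in millicores (file would contain "250")
```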
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:43:45.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:43:49.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-c2dnz" for this suite.
Jun 27 18:44:41.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:44:41.642: INFO: namespace: e2e-tests-kubelet-test-c2dnz, resource: bindings, ignored listing per whitelist
Jun 27 18:44:41.667: INFO: namespace e2e-tests-kubelet-test-c2dnz deletion completed in 52.177470608s

• [SLOW TEST:56.352 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
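The Kubelet logging test above schedules a busybox pod that writes to stdout and then reads the output back through the logs API. A sketch of the pod side — the pod name and message are illustrative; retrieval is the usual `kubectl logs <pod>`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # anything written to stdout/stderr is captured and served via the logs endpoint
    command: ["/bin/sh", "-c", "echo 'Hello from busybox'"]
```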
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:44:41.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jun 27 18:44:41.917: INFO: Pod name pod-release: Found 0 pods out of 1
Jun 27 18:44:46.973: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:44:47.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-2lrzf" for this suite.
Jun 27 18:44:53.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:44:53.515: INFO: namespace: e2e-tests-replication-controller-2lrzf, resource: bindings, ignored listing per whitelist
Jun 27 18:44:53.529: INFO: namespace e2e-tests-replication-controller-2lrzf deletion completed in 6.466315925s

• [SLOW TEST:11.861 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:44:53.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-pbx4q
Jun 27 18:44:57.902: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-pbx4q
STEP: checking the pod's current state and verifying that restartCount is present
Jun 27 18:44:57.906: INFO: Initial restart count of pod liveness-exec is 0
Jun 27 18:45:52.050: INFO: Restart count of pod e2e-tests-container-probe-pbx4q/liveness-exec is now 1 (54.144054088s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:45:52.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pbx4q" for this suite.
Jun 27 18:45:58.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:45:58.123: INFO: namespace: e2e-tests-container-probe-pbx4q, resource: bindings, ignored listing per whitelist
Jun 27 18:45:58.162: INFO: namespace e2e-tests-container-probe-pbx4q deletion completed in 6.086881234s

• [SLOW TEST:64.632 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
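The liveness-probe test creates the `liveness-exec` pod, lets an exec probe (`cat /tmp/health`) start failing, and watches the restart count climb from 0 to 1, as the log records. A sketch of such a pod — the probe command comes from the test title and the pod name from the log, but the timings and the container's command are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec              # pod name from the log
spec:
  containers:
  - name: liveness
    image: busybox
    # create the probe file, then remove it so the probe begins to fail
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the exec probe named in the test title
      initialDelaySeconds: 5
      periodSeconds: 5             # after enough failures, the kubelet restarts the container
```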
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:45:58.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:46:04.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-nj8zw" for this suite.
Jun 27 18:46:10.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:46:10.874: INFO: namespace: e2e-tests-namespaces-nj8zw, resource: bindings, ignored listing per whitelist
Jun 27 18:46:10.936: INFO: namespace e2e-tests-namespaces-nj8zw deletion completed in 6.171717472s
STEP: Destroying namespace "e2e-tests-nsdeletetest-mcrpz" for this suite.
Jun 27 18:46:10.938: INFO: Namespace e2e-tests-nsdeletetest-mcrpz was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-422rb" for this suite.
Jun 27 18:46:16.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:46:17.012: INFO: namespace: e2e-tests-nsdeletetest-422rb, resource: bindings, ignored listing per whitelist
Jun 27 18:46:17.065: INFO: namespace e2e-tests-nsdeletetest-422rb deletion completed in 6.126326384s

• [SLOW TEST:18.903 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:46:17.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-d8c5eec5-990b-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 18:46:17.248: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d8c78e44-990b-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-x257w" to be "success or failure"
Jun 27 18:46:17.252: INFO: Pod "pod-projected-configmaps-d8c78e44-990b-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265489ms
Jun 27 18:46:19.269: INFO: Pod "pod-projected-configmaps-d8c78e44-990b-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021105719s
Jun 27 18:46:21.275: INFO: Pod "pod-projected-configmaps-d8c78e44-990b-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027043092s
STEP: Saw pod success
Jun 27 18:46:21.275: INFO: Pod "pod-projected-configmaps-d8c78e44-990b-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:46:21.278: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-d8c78e44-990b-11e9-8fa9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jun 27 18:46:21.320: INFO: Waiting for pod pod-projected-configmaps-d8c78e44-990b-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:46:21.342: INFO: Pod pod-projected-configmaps-d8c78e44-990b-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:46:21.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x257w" for this suite.
Jun 27 18:46:27.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:46:27.559: INFO: namespace: e2e-tests-projected-x257w, resource: bindings, ignored listing per whitelist
Jun 27 18:46:27.639: INFO: namespace e2e-tests-projected-x257w deletion completed in 6.292861741s

• [SLOW TEST:10.574 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
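The test above mounts a projected configMap with `defaultMode` set and has the pod's container read the file back. A minimal local sketch of what `defaultMode: 0644` means for the mounted key (assumes GNU coreutils `stat -c`; the path and key name are illustrative, not taken from the suite):

```shell
# The kubelet writes each configMap key as a file and applies defaultMode's
# permission bits to it; chmod 0644 reproduces that locally.
tmpdir=$(mktemp -d)
echo "data-1" > "$tmpdir/key1"
chmod 0644 "$tmpdir/key1"       # same bits defaultMode=0644 would yield
stat -c '%a' "$tmpdir/key1"     # prints 644
cat "$tmpdir/key1"              # the test container simply cats the mounted file
```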
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:46:27.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:46:27.948: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jun 27 18:46:28.262: INFO: Number of nodes with available pods: 0
Jun 27 18:46:28.262: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jun 27 18:46:28.514: INFO: Number of nodes with available pods: 0
Jun 27 18:46:28.514: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:29.523: INFO: Number of nodes with available pods: 0
Jun 27 18:46:29.523: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:30.520: INFO: Number of nodes with available pods: 0
Jun 27 18:46:30.520: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:31.520: INFO: Number of nodes with available pods: 1
Jun 27 18:46:31.520: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jun 27 18:46:31.572: INFO: Number of nodes with available pods: 1
Jun 27 18:46:31.572: INFO: Number of running nodes: 0, number of available pods: 1
Jun 27 18:46:32.577: INFO: Number of nodes with available pods: 0
Jun 27 18:46:32.577: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jun 27 18:46:32.688: INFO: Number of nodes with available pods: 0
Jun 27 18:46:32.688: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:33.692: INFO: Number of nodes with available pods: 0
Jun 27 18:46:33.692: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:34.691: INFO: Number of nodes with available pods: 0
Jun 27 18:46:34.691: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:35.695: INFO: Number of nodes with available pods: 0
Jun 27 18:46:35.695: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:36.693: INFO: Number of nodes with available pods: 0
Jun 27 18:46:36.693: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:37.698: INFO: Number of nodes with available pods: 0
Jun 27 18:46:37.698: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:38.694: INFO: Number of nodes with available pods: 0
Jun 27 18:46:38.694: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:39.693: INFO: Number of nodes with available pods: 0
Jun 27 18:46:39.693: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:40.694: INFO: Number of nodes with available pods: 0
Jun 27 18:46:40.694: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:41.699: INFO: Number of nodes with available pods: 0
Jun 27 18:46:41.699: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:42.694: INFO: Number of nodes with available pods: 0
Jun 27 18:46:42.694: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:43.697: INFO: Number of nodes with available pods: 0
Jun 27 18:46:43.697: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:44.691: INFO: Number of nodes with available pods: 0
Jun 27 18:46:44.691: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:45.742: INFO: Number of nodes with available pods: 0
Jun 27 18:46:45.742: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:46.693: INFO: Number of nodes with available pods: 0
Jun 27 18:46:46.693: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:47.851: INFO: Number of nodes with available pods: 0
Jun 27 18:46:47.852: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:48.695: INFO: Number of nodes with available pods: 0
Jun 27 18:46:48.695: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:46:49.748: INFO: Number of nodes with available pods: 1
Jun 27 18:46:49.748: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dnn29, will wait for the garbage collector to delete the pods
Jun 27 18:46:49.828: INFO: Deleting DaemonSet.extensions daemon-set took: 14.676953ms
Jun 27 18:46:49.929: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.201163ms
Jun 27 18:46:55.782: INFO: Number of nodes with available pods: 0
Jun 27 18:46:55.782: INFO: Number of running nodes: 0, number of available pods: 0
Jun 27 18:46:55.785: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dnn29/daemonsets","resourceVersion":"1375101"},"items":null}

Jun 27 18:46:55.788: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dnn29/pods","resourceVersion":"1375101"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:46:55.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-dnn29" for this suite.
Jun 27 18:47:01.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:47:01.920: INFO: namespace: e2e-tests-daemonsets-dnn29, resource: bindings, ignored listing per whitelist
Jun 27 18:47:01.964: INFO: namespace e2e-tests-daemonsets-dnn29 deletion completed in 6.131598917s

• [SLOW TEST:34.325 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
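The DaemonSet flow above gates scheduling on a node label (daemon pods appear only after the node is relabeled) and then switches the update strategy to RollingUpdate. A sketch of a manifest exercising the same two fields; the label key/value (`color: green`) and image are assumptions, not values read from the suite:

```shell
# Writes a DaemonSet spec with a nodeSelector plus a RollingUpdate strategy.
# Applying it and then running `kubectl label node <node> color=green` would
# trigger scheduling on that node, mirroring the log's green/blue relabeling.
cat <<'EOF' > daemon-set.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green        # assumed label; pods schedule only on matching nodes
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
grep -c 'RollingUpdate' daemon-set.yaml   # prints 1
```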
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:47:01.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 18:47:02.158: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f38fccbb-990b-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-9jkpm" to be "success or failure"
Jun 27 18:47:02.258: INFO: Pod "downwardapi-volume-f38fccbb-990b-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 100.630879ms
Jun 27 18:47:04.303: INFO: Pod "downwardapi-volume-f38fccbb-990b-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145226736s
Jun 27 18:47:06.307: INFO: Pod "downwardapi-volume-f38fccbb-990b-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148964907s
STEP: Saw pod success
Jun 27 18:47:06.307: INFO: Pod "downwardapi-volume-f38fccbb-990b-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:47:06.310: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-f38fccbb-990b-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 18:47:06.408: INFO: Waiting for pod downwardapi-volume-f38fccbb-990b-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:47:06.413: INFO: Pod downwardapi-volume-f38fccbb-990b-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:47:06.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9jkpm" for this suite.
Jun 27 18:47:12.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:47:12.465: INFO: namespace: e2e-tests-projected-9jkpm, resource: bindings, ignored listing per whitelist
Jun 27 18:47:12.509: INFO: namespace e2e-tests-projected-9jkpm deletion completed in 6.090149032s

• [SLOW TEST:10.545 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
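The repeated "Waiting up to 5m0s for pod … " / "Waiting for pod … to disappear" lines come from the framework's poll-until-deadline loops. A local sketch of the same shape, checked against a plain file rather than a pod (helper name and intervals are illustrative):

```shell
# Re-check every interval until the resource is gone or the deadline passes,
# like the framework's wait-for-pod-to-disappear polling.
wait_gone() {  # wait_gone <path> <timeout_s> <interval_s>
  local deadline=$(( $(date +%s) + $2 ))
  while [ -e "$1" ]; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep "$3"
  done
}
f=$(mktemp)
( sleep 1; rm -f "$f" ) &          # stand-in for the pod being deleted
wait_gone "$f" 10 0.2 && echo "gone"   # prints "gone"
wait
```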
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:47:12.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wqbjp
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-wqbjp
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-wqbjp
Jun 27 18:47:12.700: INFO: Found 0 stateful pods, waiting for 1
Jun 27 18:47:22.705: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jun 27 18:47:22.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqbjp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 27 18:47:23.074: INFO: stderr: ""
Jun 27 18:47:23.074: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 27 18:47:23.074: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 27 18:47:23.078: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 27 18:47:33.083: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 27 18:47:33.083: INFO: Waiting for statefulset status.replicas updated to 0
Jun 27 18:47:33.108: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999848s
Jun 27 18:47:34.183: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.985425738s
Jun 27 18:47:35.188: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.910466536s
Jun 27 18:47:36.197: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.905250823s
Jun 27 18:47:37.200: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.897010718s
Jun 27 18:47:38.209: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.893539874s
Jun 27 18:47:39.214: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.884688224s
Jun 27 18:47:40.221: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.880141734s
Jun 27 18:47:41.226: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.872639466s
Jun 27 18:47:42.229: INFO: Verifying statefulset ss doesn't scale past 1 for another 868.264065ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-wqbjp
Jun 27 18:47:43.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqbjp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 27 18:47:43.461: INFO: stderr: ""
Jun 27 18:47:43.461: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 27 18:47:43.461: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 27 18:47:43.464: INFO: Found 1 stateful pods, waiting for 3
Jun 27 18:47:53.557: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 18:47:53.557: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 18:47:53.557: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jun 27 18:47:53.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqbjp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 27 18:47:53.914: INFO: stderr: ""
Jun 27 18:47:53.914: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 27 18:47:53.914: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 27 18:47:53.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqbjp ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 27 18:47:54.193: INFO: stderr: ""
Jun 27 18:47:54.193: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 27 18:47:54.193: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 27 18:47:54.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqbjp ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 27 18:47:54.648: INFO: stderr: ""
Jun 27 18:47:54.648: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 27 18:47:54.648: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 27 18:47:54.648: INFO: Waiting for statefulset status.replicas updated to 0
Jun 27 18:47:54.654: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jun 27 18:48:04.659: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 27 18:48:04.659: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jun 27 18:48:04.659: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jun 27 18:48:04.670: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999787s
Jun 27 18:48:05.676: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996154501s
Jun 27 18:48:06.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989932859s
Jun 27 18:48:07.698: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97861301s
Jun 27 18:48:08.702: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.967951555s
Jun 27 18:48:09.710: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.964280791s
Jun 27 18:48:10.717: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.956235376s
Jun 27 18:48:11.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.949449022s
Jun 27 18:48:12.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.943240338s
Jun 27 18:48:13.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 938.538973ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-wqbjp
Jun 27 18:48:14.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqbjp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 27 18:48:14.956: INFO: stderr: ""
Jun 27 18:48:14.956: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 27 18:48:14.956: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 27 18:48:14.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqbjp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 27 18:48:15.164: INFO: stderr: ""
Jun 27 18:48:15.164: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 27 18:48:15.164: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 27 18:48:15.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wqbjp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 27 18:48:15.416: INFO: stderr: ""
Jun 27 18:48:15.416: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 27 18:48:15.416: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 27 18:48:15.416: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jun 27 18:48:35.453: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wqbjp
Jun 27 18:48:35.457: INFO: Scaling statefulset ss to 0
Jun 27 18:48:35.471: INFO: Waiting for statefulset status.replicas updated to 0
Jun 27 18:48:35.475: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:48:35.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-wqbjp" for this suite.
Jun 27 18:48:41.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:48:41.646: INFO: namespace: e2e-tests-statefulset-wqbjp, resource: bindings, ignored listing per whitelist
Jun 27 18:48:41.726: INFO: namespace e2e-tests-statefulset-wqbjp deletion completed in 6.223060633s

• [SLOW TEST:89.217 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
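The `mv -v /usr/share/nginx/html/index.html /tmp/ || true` the suite runs via `kubectl exec` works by removing the file nginx's HTTP readiness probe serves: the pod goes Ready=false and scaling halts; moving the file back restores readiness. A cluster-free sketch of the pattern under a temp root (assumes GNU coreutils; `|| true` keeps the step idempotent when re-run after the file has already moved):

```shell
root=$(mktemp -d)
mkdir -p "$root/usr/share/nginx/html" "$root/tmp"
echo ok > "$root/usr/share/nginx/html/index.html"
mv -v "$root/usr/share/nginx/html/index.html" "$root/tmp/" || true   # break the probe
mv -v "$root/usr/share/nginx/html/index.html" "$root/tmp/" || true   # re-run: mv fails, '|| true' preserves exit 0
mv -v "$root/tmp/index.html" "$root/usr/share/nginx/html/" || true   # restore the probe
cat "$root/usr/share/nginx/html/index.html"                          # prints "ok"
```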
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:48:41.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 27 18:48:41.912: INFO: Waiting up to 5m0s for pod "pod-2f03e440-990c-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-ljh2m" to be "success or failure"
Jun 27 18:48:42.017: INFO: Pod "pod-2f03e440-990c-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 104.392193ms
Jun 27 18:48:44.020: INFO: Pod "pod-2f03e440-990c-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108189552s
Jun 27 18:48:46.024: INFO: Pod "pod-2f03e440-990c-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111601869s
STEP: Saw pod success
Jun 27 18:48:46.024: INFO: Pod "pod-2f03e440-990c-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:48:46.026: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-2f03e440-990c-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 18:48:46.065: INFO: Waiting for pod pod-2f03e440-990c-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:48:46.166: INFO: Pod pod-2f03e440-990c-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:48:46.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ljh2m" for this suite.
Jun 27 18:48:52.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:48:52.237: INFO: namespace: e2e-tests-emptydir-ljh2m, resource: bindings, ignored listing per whitelist
Jun 27 18:48:52.274: INFO: namespace e2e-tests-emptydir-ljh2m deletion completed in 6.102983716s

• [SLOW TEST:10.547 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:48:52.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:48:52.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-c2kkz" for this suite.
Jun 27 18:48:58.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:48:58.572: INFO: namespace: e2e-tests-services-c2kkz, resource: bindings, ignored listing per whitelist
Jun 27 18:48:58.805: INFO: namespace e2e-tests-services-c2kkz deletion completed in 6.305039688s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.531 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:48:58.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jun 27 18:49:03.049: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:49:28.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-mms9h" for this suite.
Jun 27 18:49:34.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:49:34.216: INFO: namespace: e2e-tests-namespaces-mms9h, resource: bindings, ignored listing per whitelist
Jun 27 18:49:34.275: INFO: namespace e2e-tests-namespaces-mms9h deletion completed in 6.132107985s
STEP: Destroying namespace "e2e-tests-nsdeletetest-n2h9m" for this suite.
Jun 27 18:49:34.276: INFO: Namespace e2e-tests-nsdeletetest-n2h9m was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-lct72" for this suite.
Jun 27 18:49:40.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:49:40.310: INFO: namespace: e2e-tests-nsdeletetest-lct72, resource: bindings, ignored listing per whitelist
Jun 27 18:49:40.411: INFO: namespace e2e-tests-nsdeletetest-lct72 deletion completed in 6.134693401s

• [SLOW TEST:41.606 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:49:40.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 27 18:49:40.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-qp4mj'
Jun 27 18:49:42.067: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 27 18:49:42.067: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jun 27 18:49:42.091: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jun 27 18:49:42.099: INFO: scanned /root for discovery docs: 
Jun 27 18:49:42.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-qp4mj'
Jun 27 18:49:58.006: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jun 27 18:49:58.006: INFO: stdout: "Created e2e-test-nginx-rc-252a4ccc1ed34130c2e755349eea18ce\nScaling up e2e-test-nginx-rc-252a4ccc1ed34130c2e755349eea18ce from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-252a4ccc1ed34130c2e755349eea18ce up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-252a4ccc1ed34130c2e755349eea18ce to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jun 27 18:49:58.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qp4mj'
Jun 27 18:49:58.106: INFO: stderr: ""
Jun 27 18:49:58.106: INFO: stdout: "e2e-test-nginx-rc-252a4ccc1ed34130c2e755349eea18ce-pn55g "
Jun 27 18:49:58.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-252a4ccc1ed34130c2e755349eea18ce-pn55g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qp4mj'
Jun 27 18:49:58.206: INFO: stderr: ""
Jun 27 18:49:58.206: INFO: stdout: "true"
Jun 27 18:49:58.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-252a4ccc1ed34130c2e755349eea18ce-pn55g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qp4mj'
Jun 27 18:49:58.339: INFO: stderr: ""
Jun 27 18:49:58.339: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jun 27 18:49:58.339: INFO: e2e-test-nginx-rc-252a4ccc1ed34130c2e755349eea18ce-pn55g is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jun 27 18:49:58.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qp4mj'
Jun 27 18:49:58.431: INFO: stderr: ""
Jun 27 18:49:58.431: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:49:58.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qp4mj" for this suite.
Jun 27 18:50:20.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:50:20.608: INFO: namespace: e2e-tests-kubectl-qp4mj, resource: bindings, ignored listing per whitelist
Jun 27 18:50:20.636: INFO: namespace e2e-tests-kubectl-qp4mj deletion completed in 22.151805699s

• [SLOW TEST:40.225 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:50:20.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:50:20.750: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jun 27 18:50:25.766: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jun 27 18:50:25.766: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jun 27 18:50:25.795: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-dbcxc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dbcxc/deployments/test-cleanup-deployment,UID:6cf082e1-990c-11e9-a678-fa163e0cec1d,ResourceVersion:1375791,Generation:1,CreationTimestamp:2019-06-27 18:50:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jun 27 18:50:25.800: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:50:25.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-dbcxc" for this suite.
Jun 27 18:50:32.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:50:32.204: INFO: namespace: e2e-tests-deployment-dbcxc, resource: bindings, ignored listing per whitelist
Jun 27 18:50:32.227: INFO: namespace e2e-tests-deployment-dbcxc deletion completed in 6.391391744s

• [SLOW TEST:11.590 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:50:32.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 18:50:32.332: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70d5ddd2-990c-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-94x5f" to be "success or failure"
Jun 27 18:50:32.432: INFO: Pod "downwardapi-volume-70d5ddd2-990c-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.859246ms
Jun 27 18:50:34.436: INFO: Pod "downwardapi-volume-70d5ddd2-990c-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103683468s
Jun 27 18:50:36.441: INFO: Pod "downwardapi-volume-70d5ddd2-990c-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108272113s
STEP: Saw pod success
Jun 27 18:50:36.441: INFO: Pod "downwardapi-volume-70d5ddd2-990c-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:50:36.443: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-70d5ddd2-990c-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 18:50:36.472: INFO: Waiting for pod downwardapi-volume-70d5ddd2-990c-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:50:36.475: INFO: Pod downwardapi-volume-70d5ddd2-990c-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:50:36.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-94x5f" for this suite.
Jun 27 18:50:42.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:50:42.544: INFO: namespace: e2e-tests-projected-94x5f, resource: bindings, ignored listing per whitelist
Jun 27 18:50:42.647: INFO: namespace e2e-tests-projected-94x5f deletion completed in 6.168090481s

• [SLOW TEST:10.421 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:50:42.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:50:42.816: INFO: (0) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 6.264069ms)
Jun 27 18:50:42.819: INFO: (1) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.676496ms)
Jun 27 18:50:42.821: INFO: (2) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.324061ms)
Jun 27 18:50:42.824: INFO: (3) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.384344ms)
Jun 27 18:50:42.826: INFO: (4) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.533468ms)
Jun 27 18:50:42.828: INFO: (5) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 1.952071ms)
Jun 27 18:50:42.830: INFO: (6) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.095723ms)
Jun 27 18:50:42.833: INFO: (7) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.477223ms)
Jun 27 18:50:42.835: INFO: (8) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.575319ms)
Jun 27 18:50:42.838: INFO: (9) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.629704ms)
Jun 27 18:50:42.840: INFO: (10) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.180983ms)
Jun 27 18:50:42.843: INFO: (11) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.393648ms)
Jun 27 18:50:42.845: INFO: (12) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.408002ms)
Jun 27 18:50:42.847: INFO: (13) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.12811ms)
Jun 27 18:50:42.849: INFO: (14) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.100234ms)
Jun 27 18:50:42.852: INFO: (15) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.356605ms)
Jun 27 18:50:42.854: INFO: (16) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.158633ms)
Jun 27 18:50:42.856: INFO: (17) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.262891ms)
Jun 27 18:50:42.858: INFO: (18) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 1.988792ms)
Jun 27 18:50:42.860: INFO: (19) /api/v1/nodes/hunter-server-x6tdbol33slm/proxy/logs/: 
alternatives.log
apt/
... (200; 2.155661ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:50:42.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-lbz89" for this suite.
Jun 27 18:50:48.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:50:49.033: INFO: namespace: e2e-tests-proxy-lbz89, resource: bindings, ignored listing per whitelist
Jun 27 18:50:49.050: INFO: namespace e2e-tests-proxy-lbz89 deletion completed in 6.187636278s

• [SLOW TEST:6.402 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:50:49.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:50:49.225: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jun 27 18:50:49.248: INFO: Number of nodes with available pods: 0
Jun 27 18:50:49.248: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:50:50.260: INFO: Number of nodes with available pods: 0
Jun 27 18:50:50.260: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:50:51.257: INFO: Number of nodes with available pods: 0
Jun 27 18:50:51.257: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:50:52.258: INFO: Number of nodes with available pods: 1
Jun 27 18:50:52.258: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jun 27 18:50:52.309: INFO: Wrong image for pod: daemon-set-pqf9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 27 18:50:53.319: INFO: Wrong image for pod: daemon-set-pqf9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 27 18:50:54.319: INFO: Wrong image for pod: daemon-set-pqf9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 27 18:50:55.318: INFO: Wrong image for pod: daemon-set-pqf9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 27 18:50:55.318: INFO: Pod daemon-set-pqf9w is not available
Jun 27 18:50:56.319: INFO: Pod daemon-set-7zr9f is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jun 27 18:50:56.328: INFO: Number of nodes with available pods: 0
Jun 27 18:50:56.328: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:50:57.337: INFO: Number of nodes with available pods: 0
Jun 27 18:50:57.337: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:50:58.338: INFO: Number of nodes with available pods: 0
Jun 27 18:50:58.338: INFO: Node hunter-server-x6tdbol33slm is running more than one daemon pod
Jun 27 18:50:59.342: INFO: Number of nodes with available pods: 1
Jun 27 18:50:59.342: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-rsswl, will wait for the garbage collector to delete the pods
Jun 27 18:50:59.456: INFO: Deleting DaemonSet.extensions daemon-set took: 38.262454ms
Jun 27 18:50:59.556: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.251703ms
Jun 27 18:51:02.470: INFO: Number of nodes with available pods: 0
Jun 27 18:51:02.470: INFO: Number of running nodes: 0, number of available pods: 0
Jun 27 18:51:02.477: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rsswl/daemonsets","resourceVersion":"1375953"},"items":null}

Jun 27 18:51:02.512: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rsswl/pods","resourceVersion":"1375953"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:51:02.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-rsswl" for this suite.
Jun 27 18:51:08.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:51:08.689: INFO: namespace: e2e-tests-daemonsets-rsswl, resource: bindings, ignored listing per whitelist
Jun 27 18:51:08.730: INFO: namespace e2e-tests-daemonsets-rsswl deletion completed in 6.20311321s

• [SLOW TEST:19.680 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:51:08.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jun 27 18:51:08.849: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:51:12.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-p4vkm" for this suite.
Jun 27 18:51:18.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:51:18.811: INFO: namespace: e2e-tests-init-container-p4vkm, resource: bindings, ignored listing per whitelist
Jun 27 18:51:18.842: INFO: namespace e2e-tests-init-container-p4vkm deletion completed in 6.176219604s

• [SLOW TEST:10.112 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:51:18.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jun 27 18:51:19.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bz5rq'
Jun 27 18:51:19.716: INFO: stderr: ""
Jun 27 18:51:19.716: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 27 18:51:19.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bz5rq'
Jun 27 18:51:19.943: INFO: stderr: ""
Jun 27 18:51:19.943: INFO: stdout: "update-demo-nautilus-2gmql update-demo-nautilus-f5mxr "
Jun 27 18:51:19.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gmql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bz5rq'
Jun 27 18:51:20.082: INFO: stderr: ""
Jun 27 18:51:20.082: INFO: stdout: ""
Jun 27 18:51:20.082: INFO: update-demo-nautilus-2gmql is created but not running
Jun 27 18:51:25.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bz5rq'
Jun 27 18:51:25.212: INFO: stderr: ""
Jun 27 18:51:25.213: INFO: stdout: "update-demo-nautilus-2gmql update-demo-nautilus-f5mxr "
Jun 27 18:51:25.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gmql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bz5rq'
Jun 27 18:51:25.310: INFO: stderr: ""
Jun 27 18:51:25.310: INFO: stdout: "true"
Jun 27 18:51:25.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gmql -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bz5rq'
Jun 27 18:51:25.415: INFO: stderr: ""
Jun 27 18:51:25.415: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 27 18:51:25.415: INFO: validating pod update-demo-nautilus-2gmql
Jun 27 18:51:25.422: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 27 18:51:25.422: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 27 18:51:25.422: INFO: update-demo-nautilus-2gmql is verified up and running
Jun 27 18:51:25.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f5mxr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bz5rq'
Jun 27 18:51:25.505: INFO: stderr: ""
Jun 27 18:51:25.505: INFO: stdout: "true"
Jun 27 18:51:25.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f5mxr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bz5rq'
Jun 27 18:51:25.605: INFO: stderr: ""
Jun 27 18:51:25.605: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 27 18:51:25.605: INFO: validating pod update-demo-nautilus-f5mxr
Jun 27 18:51:25.611: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 27 18:51:25.611: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 27 18:51:25.611: INFO: update-demo-nautilus-f5mxr is verified up and running
STEP: using delete to clean up resources
Jun 27 18:51:25.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bz5rq'
Jun 27 18:51:25.732: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 27 18:51:25.732: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jun 27 18:51:25.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-bz5rq'
Jun 27 18:51:25.863: INFO: stderr: "No resources found.\n"
Jun 27 18:51:25.863: INFO: stdout: ""
Jun 27 18:51:25.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-bz5rq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 27 18:51:25.946: INFO: stderr: ""
Jun 27 18:51:25.947: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:51:25.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bz5rq" for this suite.
Jun 27 18:51:47.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:51:47.992: INFO: namespace: e2e-tests-kubectl-bz5rq, resource: bindings, ignored listing per whitelist
Jun 27 18:51:48.060: INFO: namespace e2e-tests-kubectl-bz5rq deletion completed in 22.110662753s

• [SLOW TEST:29.218 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:51:48.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-dk768/secret-test-9e0ae08b-990c-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 18:51:48.214: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e10b95e-990c-11e9-8fa9-0242ac110005" in namespace "e2e-tests-secrets-dk768" to be "success or failure"
Jun 27 18:51:48.217: INFO: Pod "pod-configmaps-9e10b95e-990c-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.186581ms
Jun 27 18:51:50.223: INFO: Pod "pod-configmaps-9e10b95e-990c-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008593879s
Jun 27 18:51:52.228: INFO: Pod "pod-configmaps-9e10b95e-990c-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014078713s
STEP: Saw pod success
Jun 27 18:51:52.228: INFO: Pod "pod-configmaps-9e10b95e-990c-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
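The "Waiting up to 5m0s ... success or failure" lines above are a poll loop: the framework re-reads the pod's phase every couple of seconds until it reaches `Succeeded` (or fails, or times out). A hedged Python sketch of that loop with a stubbed phase getter (the helper name and parameters are illustrative, not the framework's real API):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded",), fail=("Failed",),
                       timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a wanted phase, a failure
    phase, or the timeout elapses -- the same progression the log
    shows: 'Pending' ... 'Pending' ... 'Succeeded'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in want:
            return phase
        if phase in fail:
            raise RuntimeError(f"pod reached terminal phase {phase}")
        time.sleep(interval)
    raise TimeoutError("pod never reached a wanted phase")

# Stubbed getter mirroring the log's three observations:
phases = iter(["Pending", "Pending", "Succeeded"])
assert wait_for_pod_phase(lambda: next(phases), interval=0) == "Succeeded"
```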
Jun 27 18:51:52.232: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-9e10b95e-990c-11e9-8fa9-0242ac110005 container env-test: 
STEP: delete the pod
Jun 27 18:51:52.293: INFO: Waiting for pod pod-configmaps-9e10b95e-990c-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:51:52.296: INFO: Pod pod-configmaps-9e10b95e-990c-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:51:52.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dk768" for this suite.
Jun 27 18:51:58.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:51:58.388: INFO: namespace: e2e-tests-secrets-dk768, resource: bindings, ignored listing per whitelist
Jun 27 18:51:58.392: INFO: namespace e2e-tests-secrets-dk768 deletion completed in 6.092630128s

• [SLOW TEST:10.332 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:51:58.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jun 27 18:52:02.689: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-a44b61f4-990c-11e9-8fa9-0242ac110005,GenerateName:,Namespace:e2e-tests-events-5lfrj,SelfLink:/api/v1/namespaces/e2e-tests-events-5lfrj/pods/send-events-a44b61f4-990c-11e9-8fa9-0242ac110005,UID:a44e0d3b-990c-11e9-a678-fa163e0cec1d,ResourceVersion:1376163,Generation:0,CreationTimestamp:2019-06-27 18:51:58 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 657188003,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zdj8m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zdj8m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-zdj8m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000eee020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000eee040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:51:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:52:01 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:52:01 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:51:58 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:10.32.0.4,StartTime:2019-06-27 18:51:58 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-06-27 18:52:01 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://9a5e1ea4aa11d6731c241abae365e0f193b3d765248a33e9500398b3e1bf3c3b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jun 27 18:52:04.693: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jun 27 18:52:06.698: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:52:06.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-5lfrj" for this suite.
Jun 27 18:52:46.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:52:46.805: INFO: namespace: e2e-tests-events-5lfrj, resource: bindings, ignored listing per whitelist
Jun 27 18:52:46.915: INFO: namespace e2e-tests-events-5lfrj deletion completed in 40.174633226s

• [SLOW TEST:48.522 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:52:46.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:52:51.272: INFO: Waiting up to 5m0s for pod "client-envvars-c3a674db-990c-11e9-8fa9-0242ac110005" in namespace "e2e-tests-pods-l4ptp" to be "success or failure"
Jun 27 18:52:51.279: INFO: Pod "client-envvars-c3a674db-990c-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.411543ms
Jun 27 18:52:53.284: INFO: Pod "client-envvars-c3a674db-990c-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012110751s
Jun 27 18:52:55.297: INFO: Pod "client-envvars-c3a674db-990c-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025246817s
STEP: Saw pod success
Jun 27 18:52:55.297: INFO: Pod "client-envvars-c3a674db-990c-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:52:55.300: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod client-envvars-c3a674db-990c-11e9-8fa9-0242ac110005 container env3cont: 
STEP: delete the pod
Jun 27 18:52:55.370: INFO: Waiting for pod client-envvars-c3a674db-990c-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:52:55.458: INFO: Pod client-envvars-c3a674db-990c-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:52:55.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-l4ptp" for this suite.
Jun 27 18:53:37.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:53:37.549: INFO: namespace: e2e-tests-pods-l4ptp, resource: bindings, ignored listing per whitelist
Jun 27 18:53:37.624: INFO: namespace e2e-tests-pods-l4ptp deletion completed in 42.16215311s

• [SLOW TEST:50.709 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:53:37.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-df5874af-990c-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 18:53:37.808: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df626e9d-990c-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-lzmdq" to be "success or failure"
Jun 27 18:53:37.817: INFO: Pod "pod-projected-secrets-df626e9d-990c-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.115768ms
Jun 27 18:53:39.822: INFO: Pod "pod-projected-secrets-df626e9d-990c-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013739111s
Jun 27 18:53:41.828: INFO: Pod "pod-projected-secrets-df626e9d-990c-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019666283s
STEP: Saw pod success
Jun 27 18:53:41.828: INFO: Pod "pod-projected-secrets-df626e9d-990c-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:53:41.836: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-df626e9d-990c-11e9-8fa9-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jun 27 18:53:41.884: INFO: Waiting for pod pod-projected-secrets-df626e9d-990c-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:53:41.899: INFO: Pod pod-projected-secrets-df626e9d-990c-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:53:41.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lzmdq" for this suite.
Jun 27 18:53:47.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:53:47.996: INFO: namespace: e2e-tests-projected-lzmdq, resource: bindings, ignored listing per whitelist
Jun 27 18:53:48.076: INFO: namespace e2e-tests-projected-lzmdq deletion completed in 6.172955158s

• [SLOW TEST:10.452 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:53:48.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-v5sp9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v5sp9 to expose endpoints map[]
Jun 27 18:53:48.229: INFO: Get endpoints failed (3.668554ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jun 27 18:53:49.232: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v5sp9 exposes endpoints map[] (1.006637384s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-v5sp9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v5sp9 to expose endpoints map[pod1:[80]]
Jun 27 18:53:52.305: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v5sp9 exposes endpoints map[pod1:[80]] (3.0640948s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-v5sp9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v5sp9 to expose endpoints map[pod1:[80] pod2:[80]]
Jun 27 18:53:55.402: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v5sp9 exposes endpoints map[pod1:[80] pod2:[80]] (3.087645397s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-v5sp9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v5sp9 to expose endpoints map[pod2:[80]]
Jun 27 18:53:55.435: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v5sp9 exposes endpoints map[pod2:[80]] (19.683078ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-v5sp9
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v5sp9 to expose endpoints map[]
Jun 27 18:53:55.726: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v5sp9 exposes endpoints map[] (261.069075ms elapsed)
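Each "waiting up to 3m0s for service ... to expose endpoints map[...]" step above re-lists the service's endpoints until the observed pod-to-ports map matches the expected one (`map[]`, then `map[pod1:[80]]`, and so on). A minimal Python sketch of that comparison loop with a stubbed lister (names here are illustrative, not the framework's actual helpers):

```python
def wait_for_endpoints(list_endpoints, expected, attempts=90):
    """Re-list a service's endpoints until they equal the expected
    pod -> ports map, e.g. {} then {"pod1": [80]} then
    {"pod1": [80], "pod2": [80]} as pods are created."""
    for _ in range(attempts):
        got = list_endpoints()
        if got == expected:
            return got
    raise TimeoutError(f"endpoints never matched {expected}")

# First listing is empty (endpoints not created yet), second matches:
states = iter([{}, {"pod1": [80]}])
assert wait_for_endpoints(lambda: next(states), {"pod1": [80]}) == {"pod1": [80]}
```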
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:53:55.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-v5sp9" for this suite.
Jun 27 18:54:17.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:54:17.918: INFO: namespace: e2e-tests-services-v5sp9, resource: bindings, ignored listing per whitelist
Jun 27 18:54:18.025: INFO: namespace e2e-tests-services-v5sp9 deletion completed in 22.159174673s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:29.949 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:54:18.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:54:18.445: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f77abb8a-990c-11e9-a678-fa163e0cec1d", Controller:(*bool)(0xc001efc062), BlockOwnerDeletion:(*bool)(0xc001efc063)}}
Jun 27 18:54:18.451: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f777f9a4-990c-11e9-a678-fa163e0cec1d", Controller:(*bool)(0xc0019cf8e2), BlockOwnerDeletion:(*bool)(0xc0019cf8e3)}}
Jun 27 18:54:18.495: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f778ceaa-990c-11e9-a678-fa163e0cec1d", Controller:(*bool)(0xc00213c032), BlockOwnerDeletion:(*bool)(0xc00213c033)}}
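The three OwnerReferences above form a deliberate dependency circle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, and the test verifies the garbage collector is not deadlocked by it. A small Python sketch of detecting such a circle by following single controller references (a simplified model, not the GC's actual graph code):

```python
def has_owner_cycle(owners):
    """owners maps object name -> owner name (one controller ref each).
    Returns True if following owner references revisits an object,
    i.e. a dependency circle like pod1 -> pod3 -> pod2 -> pod1."""
    for start in owners:
        seen = set()
        node = start
        while node in owners:
            if node in seen:
                return True  # circle found
            seen.add(node)
            node = owners[node]
    return False

# From the log: pod1 owned by pod3, pod2 by pod1, pod3 by pod2
assert has_owner_cycle({"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"})
```

In a real cluster the GC handles this by orphan/delete processing on its dependency graph rather than blocking, which is exactly what the test asserts by expecting the namespace teardown to proceed.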
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:54:23.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xq7vv" for this suite.
Jun 27 18:54:29.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:54:29.735: INFO: namespace: e2e-tests-gc-xq7vv, resource: bindings, ignored listing per whitelist
Jun 27 18:54:29.810: INFO: namespace e2e-tests-gc-xq7vv deletion completed in 6.206695933s

• [SLOW TEST:11.784 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:54:29.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jun 27 18:54:29.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s68pt'
Jun 27 18:54:30.164: INFO: stderr: ""
Jun 27 18:54:30.164: INFO: stdout: "pod/pause created\n"
Jun 27 18:54:30.164: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jun 27 18:54:30.164: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-s68pt" to be "running and ready"
Jun 27 18:54:30.183: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 19.361582ms
Jun 27 18:54:32.238: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074201783s
Jun 27 18:54:34.243: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.0786969s
Jun 27 18:54:34.243: INFO: Pod "pause" satisfied condition "running and ready"
Jun 27 18:54:34.243: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jun 27 18:54:34.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-s68pt'
Jun 27 18:54:34.320: INFO: stderr: ""
Jun 27 18:54:34.321: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jun 27 18:54:34.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-s68pt'
Jun 27 18:54:34.396: INFO: stderr: ""
Jun 27 18:54:34.396: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jun 27 18:54:34.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-s68pt'
Jun 27 18:54:34.476: INFO: stderr: ""
Jun 27 18:54:34.476: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jun 27 18:54:34.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-s68pt'
Jun 27 18:54:34.552: INFO: stderr: ""
Jun 27 18:54:34.552: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jun 27 18:54:34.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s68pt'
Jun 27 18:54:34.643: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 27 18:54:34.643: INFO: stdout: "pod \"pause\" force deleted\n"
Jun 27 18:54:34.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-s68pt'
Jun 27 18:54:34.718: INFO: stderr: "No resources found.\n"
Jun 27 18:54:34.718: INFO: stdout: ""
Jun 27 18:54:34.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-s68pt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 27 18:54:34.777: INFO: stderr: ""
Jun 27 18:54:34.777: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:54:34.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s68pt" for this suite.
Jun 27 18:54:40.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:54:40.876: INFO: namespace: e2e-tests-kubectl-s68pt, resource: bindings, ignored listing per whitelist
Jun 27 18:54:40.995: INFO: namespace e2e-tests-kubectl-s68pt deletion completed in 6.214714489s

• [SLOW TEST:11.184 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:54:40.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0627 18:55:12.034875       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 27 18:55:12.034: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:55:12.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xc65l" for this suite.
Jun 27 18:55:18.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:55:18.147: INFO: namespace: e2e-tests-gc-xc65l, resource: bindings, ignored listing per whitelist
Jun 27 18:55:18.163: INFO: namespace e2e-tests-gc-xc65l deletion completed in 6.125962404s

• [SLOW TEST:37.168 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:55:18.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jun 27 18:55:18.307: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 27 18:55:18.315: INFO: Waiting for terminating namespaces to be deleted...
Jun 27 18:55:18.319: INFO: 
Logging pods the kubelet thinks is on node hunter-server-x6tdbol33slm before test
Jun 27 18:55:18.329: INFO: kube-proxy-ww64l from kube-system started at 2019-06-16 12:55:34 +0000 UTC (1 container statuses recorded)
Jun 27 18:55:18.329: INFO: 	Container kube-proxy ready: true, restart count 0
Jun 27 18:55:18.329: INFO: etcd-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 18:55:18.329: INFO: kube-controller-manager-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 18:55:18.329: INFO: kube-apiserver-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 18:55:18.329: INFO: weave-net-z4vkv from kube-system started at 2019-06-16 12:55:36 +0000 UTC (2 container statuses recorded)
Jun 27 18:55:18.329: INFO: 	Container weave ready: true, restart count 0
Jun 27 18:55:18.329: INFO: 	Container weave-npc ready: true, restart count 0
Jun 27 18:55:18.329: INFO: kube-scheduler-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 18:55:18.329: INFO: coredns-86c58d9df4-99n2k from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded)
Jun 27 18:55:18.329: INFO: 	Container coredns ready: true, restart count 0
Jun 27 18:55:18.329: INFO: coredns-86c58d9df4-zdm4x from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded)
Jun 27 18:55:18.329: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ac230f0e1d4a89], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:55:19.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-b2fv6" for this suite.
Jun 27 18:55:25.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:55:25.517: INFO: namespace: e2e-tests-sched-pred-b2fv6, resource: bindings, ignored listing per whitelist
Jun 27 18:55:25.575: INFO: namespace e2e-tests-sched-pred-b2fv6 deletion completed in 6.214411011s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.412 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
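The scheduling spec above creates a pod whose `nodeSelector` matches no node, and then asserts the `FailedScheduling` event recorded in the log ("0/1 nodes are available: 1 node(s) didn't match node selector."). A minimal sketch of such a pod, with the pod name taken from the event and the selector label invented for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod        # name taken from the event above
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine   # image illustrative
  nodeSelector:
    # A label no node in the cluster carries, so the scheduler
    # reports "node(s) didn't match node selector".
    nonexistent-label: nonexistent-value
```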
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:55:25.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 27 18:55:25.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-djcst'
Jun 27 18:55:25.796: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 27 18:55:25.796: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jun 27 18:55:25.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-djcst'
Jun 27 18:55:25.933: INFO: stderr: ""
Jun 27 18:55:25.933: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:55:25.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-djcst" for this suite.
Jun 27 18:55:39.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:55:40.055: INFO: namespace: e2e-tests-kubectl-djcst, resource: bindings, ignored listing per whitelist
Jun 27 18:55:40.078: INFO: namespace e2e-tests-kubectl-djcst deletion completed in 14.113701669s

• [SLOW TEST:14.503 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
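The `kubectl run --generator=job/v1` invocation above is deprecated, as the stderr in the log itself warns. A declarative equivalent of what it creates (`job.batch/e2e-test-nginx-job`), sketched under the assumption of default spec values, is:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      # What --restart=OnFailure selects: failed containers are
      # restarted in place rather than the Job spawning new pods.
      restartPolicy: OnFailure
```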
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:55:40.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 18:55:40.200: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2856844c-990d-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-4qnqj" to be "success or failure"
Jun 27 18:55:40.212: INFO: Pod "downwardapi-volume-2856844c-990d-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.528471ms
Jun 27 18:55:42.215: INFO: Pod "downwardapi-volume-2856844c-990d-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015625121s
Jun 27 18:55:44.220: INFO: Pod "downwardapi-volume-2856844c-990d-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020657099s
STEP: Saw pod success
Jun 27 18:55:44.220: INFO: Pod "downwardapi-volume-2856844c-990d-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:55:44.224: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-2856844c-990d-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 18:55:44.254: INFO: Waiting for pod downwardapi-volume-2856844c-990d-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:55:44.259: INFO: Pod downwardapi-volume-2856844c-990d-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:55:44.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4qnqj" for this suite.
Jun 27 18:55:50.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:55:50.304: INFO: namespace: e2e-tests-projected-4qnqj, resource: bindings, ignored listing per whitelist
Jun 27 18:55:50.408: INFO: namespace e2e-tests-projected-4qnqj deletion completed in 6.144139366s

• [SLOW TEST:10.330 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
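The projected downward API spec above creates a pod whose volume exposes the container's own memory request as a file, then checks the file contents in the container logs. A minimal sketch of the manifest shape (the image, command, paths, and request value are illustrative; only the container name `client-container` comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container            # container name taken from the log
    image: busybox                    # illustrative; the e2e suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                  # the value the projected file reflects (in bytes)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```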
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:55:50.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 27 18:55:50.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-58fww'
Jun 27 18:55:50.633: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 27 18:55:50.633: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jun 27 18:55:50.650: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-98f6g]
Jun 27 18:55:50.651: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-98f6g" in namespace "e2e-tests-kubectl-58fww" to be "running and ready"
Jun 27 18:55:50.677: INFO: Pod "e2e-test-nginx-rc-98f6g": Phase="Pending", Reason="", readiness=false. Elapsed: 26.724744ms
Jun 27 18:55:52.683: INFO: Pod "e2e-test-nginx-rc-98f6g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032062072s
Jun 27 18:55:54.688: INFO: Pod "e2e-test-nginx-rc-98f6g": Phase="Running", Reason="", readiness=true. Elapsed: 4.03719407s
Jun 27 18:55:54.688: INFO: Pod "e2e-test-nginx-rc-98f6g" satisfied condition "running and ready"
Jun 27 18:55:54.688: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-98f6g]
Jun 27 18:55:54.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-58fww'
Jun 27 18:55:54.829: INFO: stderr: ""
Jun 27 18:55:54.829: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jun 27 18:55:54.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-58fww'
Jun 27 18:55:55.092: INFO: stderr: ""
Jun 27 18:55:55.092: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:55:55.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-58fww" for this suite.
Jun 27 18:56:17.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:56:17.228: INFO: namespace: e2e-tests-kubectl-58fww, resource: bindings, ignored listing per whitelist
Jun 27 18:56:17.258: INFO: namespace e2e-tests-kubectl-58fww deletion completed in 22.156473724s

• [SLOW TEST:26.850 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
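The `kubectl run --generator=run/v1` invocation above is likewise deprecated (again per the stderr in the log). The ReplicationController it creates can be sketched declaratively as follows; the `run:` label key is what that generator applies by convention, though the exact labels are an assumption here:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc            # label assumed from the run/v1 generator convention
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```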
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:56:17.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 18:56:17.368: INFO: Creating deployment "test-recreate-deployment"
Jun 27 18:56:17.372: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jun 27 18:56:17.397: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jun 27 18:56:19.431: INFO: Waiting deployment "test-recreate-deployment" to complete
Jun 27 18:56:19.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697258577, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697258577, loc:(*time.Location)(0x7947a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63697258577, loc:(*time.Location)(0x7947a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697258577, loc:(*time.Location)(0x7947a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5dfdcc846d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 27 18:56:21.439: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jun 27 18:56:21.451: INFO: Updating deployment test-recreate-deployment
Jun 27 18:56:21.451: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jun 27 18:56:22.049: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-bpczc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bpczc/deployments/test-recreate-deployment,UID:3e81ae09-990d-11e9-a678-fa163e0cec1d,ResourceVersion:1376881,Generation:2,CreationTimestamp:2019-06-27 18:56:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-06-27 18:56:21 +0000 UTC 2019-06-27 18:56:21 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-06-27 18:56:21 +0000 UTC 2019-06-27 18:56:17 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-697fbf54bf" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jun 27 18:56:22.053: INFO: New ReplicaSet "test-recreate-deployment-697fbf54bf" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-697fbf54bf,GenerateName:,Namespace:e2e-tests-deployment-bpczc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bpczc/replicasets/test-recreate-deployment-697fbf54bf,UID:411aae96-990d-11e9-a678-fa163e0cec1d,ResourceVersion:1376880,Generation:1,CreationTimestamp:2019-06-27 18:56:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3e81ae09-990d-11e9-a678-fa163e0cec1d 0xc0026caa07 0xc0026caa08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jun 27 18:56:22.053: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jun 27 18:56:22.053: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5dfdcc846d,GenerateName:,Namespace:e2e-tests-deployment-bpczc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bpczc/replicasets/test-recreate-deployment-5dfdcc846d,UID:3e8625ac-990d-11e9-a678-fa163e0cec1d,ResourceVersion:1376872,Generation:2,CreationTimestamp:2019-06-27 18:56:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 3e81ae09-990d-11e9-a678-fa163e0cec1d 0xc0026ca947 0xc0026ca948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jun 27 18:56:22.072: INFO: Pod "test-recreate-deployment-697fbf54bf-qmbc9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-697fbf54bf-qmbc9,GenerateName:test-recreate-deployment-697fbf54bf-,Namespace:e2e-tests-deployment-bpczc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bpczc/pods/test-recreate-deployment-697fbf54bf-qmbc9,UID:411b63db-990d-11e9-a678-fa163e0cec1d,ResourceVersion:1376883,Generation:0,CreationTimestamp:2019-06-27 18:56:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-697fbf54bf 411aae96-990d-11e9-a678-fa163e0cec1d 0xc0026cb257 0xc0026cb258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cwcrh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cwcrh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cwcrh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-x6tdbol33slm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cb2c0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0026cb2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:56:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:56:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:56:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 18:56:21 +0000 UTC  }],Message:,Reason:,HostIP:192.168.100.12,PodIP:,StartTime:2019-06-27 18:56:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:56:22.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-bpczc" for this suite.
Jun 27 18:56:30.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:56:30.114: INFO: namespace: e2e-tests-deployment-bpczc, resource: bindings, ignored listing per whitelist
Jun 27 18:56:30.174: INFO: namespace e2e-tests-deployment-bpczc deletion completed in 8.097500578s

• [SLOW TEST:12.915 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
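The Deployment dump above shows `Strategy: Recreate`, an old ReplicaSet running `gcr.io/kubernetes-e2e-test-images/redis:1.0`, and a new one running `nginx:1.14-alpine`. Reassembled from those fields into a readable manifest (a sketch, not the suite's literal source):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    # Recreate kills all old pods before any new pod starts,
    # which is exactly what this spec verifies.
    type: Recreate
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0   # value taken from the struct dump
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        # The rollout at 18:56:21 replaces this container with
        # docker.io/library/nginx:1.14-alpine, per the new ReplicaSet above.
```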
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:56:30.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gptff
Jun 27 18:56:34.422: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gptff
STEP: checking the pod's current state and verifying that restartCount is present
Jun 27 18:56:34.425: INFO: Initial restart count of pod liveness-http is 0
Jun 27 18:56:47.198: INFO: Restart count of pod e2e-tests-container-probe-gptff/liveness-http is now 1 (12.772984401s elapsed)
Jun 27 18:57:05.240: INFO: Restart count of pod e2e-tests-container-probe-gptff/liveness-http is now 2 (30.815028855s elapsed)
Jun 27 18:57:27.464: INFO: Restart count of pod e2e-tests-container-probe-gptff/liveness-http is now 3 (53.038753286s elapsed)
Jun 27 18:57:45.509: INFO: Restart count of pod e2e-tests-container-probe-gptff/liveness-http is now 4 (1m11.083548265s elapsed)
Jun 27 18:58:57.718: INFO: Restart count of pod e2e-tests-container-probe-gptff/liveness-http is now 5 (2m23.292588235s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:58:57.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gptff" for this suite.
Jun 27 18:59:03.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:59:03.798: INFO: namespace: e2e-tests-container-probe-gptff, resource: bindings, ignored listing per whitelist
Jun 27 18:59:03.854: INFO: namespace e2e-tests-container-probe-gptff deletion completed in 6.084575304s

• [SLOW TEST:153.680 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
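The probe spec above watches pod `liveness-http` fail its HTTP liveness probe repeatedly and asserts that `restartCount` only ever increases (0 → 5 over ~2m23s in this run). A pod of that shape can be sketched as below; the image, probe path, port, and timings are all assumptions for illustration, not taken from this log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http                 # pod name taken from the log
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness        # illustrative: a server that starts failing /healthz
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5          # timings illustrative
      periodSeconds: 3
```

Each probe failure past the failure threshold triggers a kubelet restart of the container, incrementing `status.containerStatuses[0].restartCount`, which the test polls.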
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:59:03.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-a1c8aa41-990d-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 18:59:04.031: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a1c930c5-990d-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-gvg7k" to be "success or failure"
Jun 27 18:59:04.077: INFO: Pod "pod-projected-secrets-a1c930c5-990d-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.068243ms
Jun 27 18:59:06.081: INFO: Pod "pod-projected-secrets-a1c930c5-990d-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050479065s
Jun 27 18:59:08.086: INFO: Pod "pod-projected-secrets-a1c930c5-990d-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055426955s
STEP: Saw pod success
Jun 27 18:59:08.086: INFO: Pod "pod-projected-secrets-a1c930c5-990d-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:59:08.090: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-secrets-a1c930c5-990d-11e9-8fa9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jun 27 18:59:08.152: INFO: Waiting for pod pod-projected-secrets-a1c930c5-990d-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:59:08.160: INFO: Pod pod-projected-secrets-a1c930c5-990d-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:59:08.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gvg7k" for this suite.
Jun 27 18:59:14.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:59:14.237: INFO: namespace: e2e-tests-projected-gvg7k, resource: bindings, ignored listing per whitelist
Jun 27 18:59:14.325: INFO: namespace e2e-tests-projected-gvg7k deletion completed in 6.15990788s

• [SLOW TEST:10.471 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
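"Consumable in multiple volumes" means the same projected secret is mounted at two different paths in one pod and the test container reads it from both. A minimal sketch of such a spec, assuming illustrative names (the suite's real pod uses generated UUID-suffixed names and its own mounttest image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                        # illustrative; not the suite's test image
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test     # placeholder; the log's secret name carries a UUID suffix
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
```

The pod runs to `Succeeded` once the command exits 0, which is the "success or failure" condition the log waits on.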
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:59:14.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a8178e27-990d-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 18:59:14.544: INFO: Waiting up to 5m0s for pod "pod-configmaps-a819767d-990d-11e9-8fa9-0242ac110005" in namespace "e2e-tests-configmap-s7gnn" to be "success or failure"
Jun 27 18:59:14.644: INFO: Pod "pod-configmaps-a819767d-990d-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.614558ms
Jun 27 18:59:16.740: INFO: Pod "pod-configmaps-a819767d-990d-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196080818s
Jun 27 18:59:18.747: INFO: Pod "pod-configmaps-a819767d-990d-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.202562872s
STEP: Saw pod success
Jun 27 18:59:18.747: INFO: Pod "pod-configmaps-a819767d-990d-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:59:18.757: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-a819767d-990d-11e9-8fa9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jun 27 18:59:19.039: INFO: Waiting for pod pod-configmaps-a819767d-990d-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:59:19.047: INFO: Pod pod-configmaps-a819767d-990d-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:59:19.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-s7gnn" for this suite.
Jun 27 18:59:25.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:59:25.111: INFO: namespace: e2e-tests-configmap-s7gnn, resource: bindings, ignored listing per whitelist
Jun 27 18:59:25.210: INFO: namespace e2e-tests-configmap-s7gnn deletion completed in 6.159696973s

• [SLOW TEST:10.885 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
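The ConfigMap case follows the same pattern: a configMap is created, mounted as a volume, and a short-lived container reads a key back out. A hedged sketch under the same caveats (illustrative image and names; the suite generates UUID-suffixed names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                      # illustrative
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume       # placeholder for the UUID-suffixed name in the log
```

Each key in the configMap's `data` appears as a file under the mount path, so the container's exit status reflects whether the expected content was present.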
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:59:25.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jun 27 18:59:25.343: INFO: Waiting up to 5m0s for pod "client-containers-ae890de5-990d-11e9-8fa9-0242ac110005" in namespace "e2e-tests-containers-c6h97" to be "success or failure"
Jun 27 18:59:25.346: INFO: Pod "client-containers-ae890de5-990d-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.442367ms
Jun 27 18:59:27.351: INFO: Pod "client-containers-ae890de5-990d-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008364902s
Jun 27 18:59:29.357: INFO: Pod "client-containers-ae890de5-990d-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013891318s
STEP: Saw pod success
Jun 27 18:59:29.357: INFO: Pod "client-containers-ae890de5-990d-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 18:59:29.360: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod client-containers-ae890de5-990d-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 18:59:29.388: INFO: Waiting for pod client-containers-ae890de5-990d-11e9-8fa9-0242ac110005 to disappear
Jun 27 18:59:29.394: INFO: Pod client-containers-ae890de5-990d-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:59:29.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-c6h97" for this suite.
Jun 27 18:59:35.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 18:59:35.492: INFO: namespace: e2e-tests-containers-c6h97, resource: bindings, ignored listing per whitelist
Jun 27 18:59:35.529: INFO: namespace e2e-tests-containers-c6h97 deletion completed in 6.130159738s

• [SLOW TEST:10.318 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
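"Override the image's default arguments (docker cmd)" exercises the mapping between Kubernetes `args` and Docker's `CMD`: setting `args` in the pod spec replaces the image's `CMD` while leaving its `ENTRYPOINT` alone. A minimal sketch (image and argument values are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # illustrative; the suite uses its own entrypoint-tester image
    # args replaces the image's default CMD; setting command would replace ENTRYPOINT instead
    args: ["override", "arguments"]
```

The test then reads the container's output to confirm the overridden arguments were what actually ran.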
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 18:59:35.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jun 27 18:59:35.617: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jun 27 18:59:35.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:35.980: INFO: stderr: ""
Jun 27 18:59:35.980: INFO: stdout: "service/redis-slave created\n"
Jun 27 18:59:35.980: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jun 27 18:59:35.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:36.347: INFO: stderr: ""
Jun 27 18:59:36.347: INFO: stdout: "service/redis-master created\n"
Jun 27 18:59:36.347: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jun 27 18:59:36.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:36.730: INFO: stderr: ""
Jun 27 18:59:36.730: INFO: stdout: "service/frontend created\n"
Jun 27 18:59:36.730: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jun 27 18:59:36.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:37.027: INFO: stderr: ""
Jun 27 18:59:37.027: INFO: stdout: "deployment.extensions/frontend created\n"
Jun 27 18:59:37.027: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jun 27 18:59:37.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:37.336: INFO: stderr: ""
Jun 27 18:59:37.336: INFO: stdout: "deployment.extensions/redis-master created\n"
Jun 27 18:59:37.336: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jun 27 18:59:37.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:37.731: INFO: stderr: ""
Jun 27 18:59:37.731: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jun 27 18:59:37.731: INFO: Waiting for all frontend pods to be Running.
Jun 27 18:59:47.781: INFO: Waiting for frontend to serve content.
Jun 27 18:59:48.720: INFO: Trying to add a new entry to the guestbook.
Jun 27 18:59:48.764: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jun 27 18:59:48.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:50.996: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 27 18:59:50.996: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jun 27 18:59:50.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:51.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 27 18:59:51.327: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 27 18:59:51.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:51.720: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 27 18:59:51.720: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 27 18:59:51.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:51.936: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 27 18:59:51.936: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 27 18:59:51.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:52.034: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 27 18:59:52.034: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 27 18:59:52.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dtkn7'
Jun 27 18:59:52.220: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 27 18:59:52.220: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 18:59:52.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dtkn7" for this suite.
Jun 27 19:00:38.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:00:38.447: INFO: namespace: e2e-tests-kubectl-dtkn7, resource: bindings, ignored listing per whitelist
Jun 27 19:00:38.554: INFO: namespace e2e-tests-kubectl-dtkn7 deletion completed in 46.284332114s

• [SLOW TEST:63.025 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:00:38.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jun 27 19:00:39.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:39.316: INFO: stderr: ""
Jun 27 19:00:39.316: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 27 19:00:39.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:39.474: INFO: stderr: ""
Jun 27 19:00:39.474: INFO: stdout: "update-demo-nautilus-cqhk7 update-demo-nautilus-zcnt9 "
Jun 27 19:00:39.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqhk7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:39.564: INFO: stderr: ""
Jun 27 19:00:39.564: INFO: stdout: ""
Jun 27 19:00:39.564: INFO: update-demo-nautilus-cqhk7 is created but not running
Jun 27 19:00:44.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:44.668: INFO: stderr: ""
Jun 27 19:00:44.668: INFO: stdout: "update-demo-nautilus-cqhk7 update-demo-nautilus-zcnt9 "
Jun 27 19:00:44.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqhk7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:44.751: INFO: stderr: ""
Jun 27 19:00:44.751: INFO: stdout: "true"
Jun 27 19:00:44.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqhk7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:44.829: INFO: stderr: ""
Jun 27 19:00:44.829: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 27 19:00:44.829: INFO: validating pod update-demo-nautilus-cqhk7
Jun 27 19:00:44.833: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 27 19:00:44.833: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 27 19:00:44.833: INFO: update-demo-nautilus-cqhk7 is verified up and running
Jun 27 19:00:44.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zcnt9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:44.904: INFO: stderr: ""
Jun 27 19:00:44.904: INFO: stdout: "true"
Jun 27 19:00:44.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zcnt9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:44.979: INFO: stderr: ""
Jun 27 19:00:44.979: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 27 19:00:44.979: INFO: validating pod update-demo-nautilus-zcnt9
Jun 27 19:00:44.983: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 27 19:00:44.983: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 27 19:00:44.983: INFO: update-demo-nautilus-zcnt9 is verified up and running
STEP: scaling down the replication controller
Jun 27 19:00:44.984: INFO: scanned /root for discovery docs: 
Jun 27 19:00:44.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:46.209: INFO: stderr: ""
Jun 27 19:00:46.209: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 27 19:00:46.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:46.286: INFO: stderr: ""
Jun 27 19:00:46.286: INFO: stdout: "update-demo-nautilus-cqhk7 update-demo-nautilus-zcnt9 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 27 19:00:51.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:51.433: INFO: stderr: ""
Jun 27 19:00:51.433: INFO: stdout: "update-demo-nautilus-cqhk7 update-demo-nautilus-zcnt9 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jun 27 19:00:56.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:56.575: INFO: stderr: ""
Jun 27 19:00:56.575: INFO: stdout: "update-demo-nautilus-cqhk7 "
Jun 27 19:00:56.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqhk7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:56.703: INFO: stderr: ""
Jun 27 19:00:56.703: INFO: stdout: "true"
Jun 27 19:00:56.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqhk7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:56.808: INFO: stderr: ""
Jun 27 19:00:56.808: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 27 19:00:56.808: INFO: validating pod update-demo-nautilus-cqhk7
Jun 27 19:00:56.812: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 27 19:00:56.812: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 27 19:00:56.812: INFO: update-demo-nautilus-cqhk7 is verified up and running
STEP: scaling up the replication controller
Jun 27 19:00:56.814: INFO: scanned /root for discovery docs: 
Jun 27 19:00:56.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:57.952: INFO: stderr: ""
Jun 27 19:00:57.952: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 27 19:00:57.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:58.051: INFO: stderr: ""
Jun 27 19:00:58.051: INFO: stdout: "update-demo-nautilus-cqhk7 update-demo-nautilus-j7ml7 "
Jun 27 19:00:58.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqhk7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:58.128: INFO: stderr: ""
Jun 27 19:00:58.128: INFO: stdout: "true"
Jun 27 19:00:58.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqhk7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:58.210: INFO: stderr: ""
Jun 27 19:00:58.210: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 27 19:00:58.210: INFO: validating pod update-demo-nautilus-cqhk7
Jun 27 19:00:58.222: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 27 19:00:58.222: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 27 19:00:58.222: INFO: update-demo-nautilus-cqhk7 is verified up and running
Jun 27 19:00:58.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j7ml7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:00:58.298: INFO: stderr: ""
Jun 27 19:00:58.298: INFO: stdout: ""
Jun 27 19:00:58.298: INFO: update-demo-nautilus-j7ml7 is created but not running
Jun 27 19:01:03.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:01:03.376: INFO: stderr: ""
Jun 27 19:01:03.376: INFO: stdout: "update-demo-nautilus-cqhk7 update-demo-nautilus-j7ml7 "
Jun 27 19:01:03.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqhk7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:01:03.446: INFO: stderr: ""
Jun 27 19:01:03.446: INFO: stdout: "true"
Jun 27 19:01:03.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqhk7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:01:03.512: INFO: stderr: ""
Jun 27 19:01:03.512: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 27 19:01:03.512: INFO: validating pod update-demo-nautilus-cqhk7
Jun 27 19:01:03.515: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 27 19:01:03.515: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 27 19:01:03.515: INFO: update-demo-nautilus-cqhk7 is verified up and running
Jun 27 19:01:03.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j7ml7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:01:03.583: INFO: stderr: ""
Jun 27 19:01:03.583: INFO: stdout: "true"
Jun 27 19:01:03.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j7ml7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:01:03.685: INFO: stderr: ""
Jun 27 19:01:03.685: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jun 27 19:01:03.685: INFO: validating pod update-demo-nautilus-j7ml7
Jun 27 19:01:03.689: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 27 19:01:03.689: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 27 19:01:03.689: INFO: update-demo-nautilus-j7ml7 is verified up and running
STEP: using delete to clean up resources
Jun 27 19:01:03.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:01:03.767: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 27 19:01:03.767: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jun 27 19:01:03.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-9xlxr'
Jun 27 19:01:03.889: INFO: stderr: "No resources found.\n"
Jun 27 19:01:03.889: INFO: stdout: ""
Jun 27 19:01:03.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-9xlxr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 27 19:01:03.960: INFO: stderr: ""
Jun 27 19:01:03.960: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:01:03.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9xlxr" for this suite.
Jun 27 19:01:26.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:01:26.239: INFO: namespace: e2e-tests-kubectl-9xlxr, resource: bindings, ignored listing per whitelist
Jun 27 19:01:26.285: INFO: namespace e2e-tests-kubectl-9xlxr deletion completed in 22.322483532s

• [SLOW TEST:47.731 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
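The go-template checks in the test above (is the `update-demo` container running; which image does it run) can be approximated in plain Python against a pod object decoded from `kubectl get pod -o json`. This is an illustrative sketch, not the framework's code; the `pod` dict below is hand-written sample data, not output from this run.

```python
# Sketch: Python equivalents of the two go-templates the test feeds to kubectl.
# The pod dict is a hand-written illustration, not real cluster output.

def container_running(pod, name):
    """Mirror of the status template: True iff the named container has a
    containerStatus whose state map contains a 'running' entry."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

def container_image(pod, name):
    """Mirror of the spec template: the image of the named container."""
    for c in pod.get("spec", {}).get("containers", []):
        if c.get("name") == name:
            return c.get("image")
    return None

pod = {
    "spec": {"containers": [{"name": "update-demo",
                             "image": "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"}]},
    "status": {"containerStatuses": [{"name": "update-demo",
                                      "state": {"running": {}}}]},
}

print(container_running(pod, "update-demo"))  # True
print(container_image(pod, "update-demo"))    # gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```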
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:01:26.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jun 27 19:01:26.382: INFO: namespace e2e-tests-kubectl-b7t6h
Jun 27 19:01:26.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b7t6h'
Jun 27 19:01:26.588: INFO: stderr: ""
Jun 27 19:01:26.588: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jun 27 19:01:27.613: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:01:27.613: INFO: Found 0 / 1
Jun 27 19:01:28.594: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:01:28.594: INFO: Found 0 / 1
Jun 27 19:01:29.706: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:01:29.706: INFO: Found 0 / 1
Jun 27 19:01:30.594: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:01:30.594: INFO: Found 0 / 1
Jun 27 19:01:31.596: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:01:31.596: INFO: Found 1 / 1
Jun 27 19:01:31.596: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jun 27 19:01:31.601: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:01:31.601: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun 27 19:01:31.601: INFO: wait on redis-master startup in e2e-tests-kubectl-b7t6h 
Jun 27 19:01:31.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lcjms redis-master --namespace=e2e-tests-kubectl-b7t6h'
Jun 27 19:01:31.799: INFO: stderr: ""
Jun 27 19:01:31.799: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 27 Jun 19:01:29.537 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Jun 19:01:29.538 # Server started, Redis version 3.2.12\n1:M 27 Jun 19:01:29.539 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Jun 19:01:29.539 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jun 27 19:01:31.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-b7t6h'
Jun 27 19:01:31.970: INFO: stderr: ""
Jun 27 19:01:31.970: INFO: stdout: "service/rm2 exposed\n"
Jun 27 19:01:32.054: INFO: Service rm2 in namespace e2e-tests-kubectl-b7t6h found.
STEP: exposing service
Jun 27 19:01:34.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-b7t6h'
Jun 27 19:01:34.455: INFO: stderr: ""
Jun 27 19:01:34.455: INFO: stdout: "service/rm3 exposed\n"
Jun 27 19:01:34.464: INFO: Service rm3 in namespace e2e-tests-kubectl-b7t6h found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:01:36.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b7t6h" for this suite.
Jun 27 19:02:00.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:02:00.576: INFO: namespace: e2e-tests-kubectl-b7t6h, resource: bindings, ignored listing per whitelist
Jun 27 19:02:00.606: INFO: namespace e2e-tests-kubectl-b7t6h deletion completed in 24.129521368s

• [SLOW TEST:34.321 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
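`kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` generates a Service whose selector matches the exposed object's pods and whose single port maps `--port` to `--target-port`. A rough sketch of that mapping (field names follow the v1 Service schema; the selector value here is assumed, not read from this run):

```python
def expose(name, selector, port, target_port):
    """Build a minimal v1 Service manifest the way `kubectl expose` does:
    the exposed object's selector becomes the Service selector, and
    --port/--target-port become a single port mapping."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": dict(selector),
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }

# First expose in the test: rc redis-master -> service rm2 on 1234 -> 6379.
svc = expose("rm2", {"app": "redis", "role": "master"}, 1234, 6379)
print(svc["spec"]["ports"][0])
```

Exposing `rm2` again as `rm3` (the test's second step) reuses the same selector, so both Services front the same Redis pod on different cluster ports.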
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:02:00.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-mjstx/configmap-test-0b290140-990e-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 19:02:00.766: INFO: Waiting up to 5m0s for pod "pod-configmaps-0b2a7bea-990e-11e9-8fa9-0242ac110005" in namespace "e2e-tests-configmap-mjstx" to be "success or failure"
Jun 27 19:02:00.772: INFO: Pod "pod-configmaps-0b2a7bea-990e-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.903214ms
Jun 27 19:02:02.777: INFO: Pod "pod-configmaps-0b2a7bea-990e-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010736739s
Jun 27 19:02:04.780: INFO: Pod "pod-configmaps-0b2a7bea-990e-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013563229s
Jun 27 19:02:06.788: INFO: Pod "pod-configmaps-0b2a7bea-990e-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021967337s
STEP: Saw pod success
Jun 27 19:02:06.788: INFO: Pod "pod-configmaps-0b2a7bea-990e-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:02:06.793: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-0b2a7bea-990e-11e9-8fa9-0242ac110005 container env-test: 
STEP: delete the pod
Jun 27 19:02:06.847: INFO: Waiting for pod pod-configmaps-0b2a7bea-990e-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:02:06.902: INFO: Pod pod-configmaps-0b2a7bea-990e-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:02:06.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mjstx" for this suite.
Jun 27 19:02:12.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:02:12.983: INFO: namespace: e2e-tests-configmap-mjstx, resource: bindings, ignored listing per whitelist
Jun 27 19:02:13.053: INFO: namespace e2e-tests-configmap-mjstx deletion completed in 6.146938197s

• [SLOW TEST:12.447 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
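The "success or failure" wait above polls the pod phase until it reaches a terminal state (`Succeeded` or `Failed`) or the 5m timeout expires. A minimal sketch of that loop, with the phase source stubbed out so it runs without a cluster:

```python
import itertools
import time

TERMINAL = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=0.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase, mirroring the
    framework's 'Waiting up to 5m0s ... to be "success or failure"' loop."""
    start = clock()
    while True:
        phase = get_phase()
        if phase in TERMINAL:
            return phase
        if clock() - start > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Simulate the sequence seen in the log: Pending x3, then Succeeded.
phases = itertools.chain(["Pending", "Pending", "Pending"],
                         itertools.repeat("Succeeded"))
print(wait_for_terminal_phase(lambda: next(phases)))  # Succeeded
```

Note the test treats `Failed` as terminal too; it only asserts success afterwards ("Saw pod success").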
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:02:13.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 19:02:13.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jun 27 19:02:13.219: INFO: stderr: ""
Jun 27 19:02:13.219: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.7\", GitCommit:\"4683545293d792934a7a7e12f2cc47d20b2dd01b\", GitTreeState:\"clean\", BuildDate:\"2019-06-27T17:43:29Z\", GoVersion:\"go1.11.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jun 27 19:02:13.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ddjtd'
Jun 27 19:02:13.438: INFO: stderr: ""
Jun 27 19:02:13.438: INFO: stdout: "replicationcontroller/redis-master created\n"
Jun 27 19:02:13.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ddjtd'
Jun 27 19:02:13.810: INFO: stderr: ""
Jun 27 19:02:13.810: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jun 27 19:02:14.814: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:02:14.814: INFO: Found 0 / 1
Jun 27 19:02:15.814: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:02:15.814: INFO: Found 0 / 1
Jun 27 19:02:16.813: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:02:16.813: INFO: Found 1 / 1
Jun 27 19:02:16.813: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jun 27 19:02:16.815: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:02:16.815: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun 27 19:02:16.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-c9smd --namespace=e2e-tests-kubectl-ddjtd'
Jun 27 19:02:16.933: INFO: stderr: ""
Jun 27 19:02:16.933: INFO: stdout: "Name:               redis-master-c9smd\nNamespace:          e2e-tests-kubectl-ddjtd\nPriority:           0\nPriorityClassName:  <none>\nNode:               hunter-server-x6tdbol33slm/192.168.100.12\nStart Time:         Thu, 27 Jun 2019 19:02:13 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        <none>\nStatus:             Running\nIP:                 10.32.0.4\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://da335d6655898731b0cc3b1ed22b276d51a9e813505252d6e6d00a562299e023\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 27 Jun 2019 19:02:15 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s5pcd (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-s5pcd:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-s5pcd\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                                 Message\n  ----    ------     ----  ----                                 -------\n  Normal  Scheduled  3s    default-scheduler                    Successfully assigned e2e-tests-kubectl-ddjtd/redis-master-c9smd to hunter-server-x6tdbol33slm\n  Normal  Pulled     2s    kubelet, hunter-server-x6tdbol33slm  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, hunter-server-x6tdbol33slm  Created container\n  Normal  Started    1s    kubelet, hunter-server-x6tdbol33slm  Started container\n"
Jun 27 19:02:16.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-ddjtd'
Jun 27 19:02:17.044: INFO: stderr: ""
Jun 27 19:02:17.044: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-ddjtd\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: redis-master-c9smd\n"
Jun 27 19:02:17.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-ddjtd'
Jun 27 19:02:17.128: INFO: stderr: ""
Jun 27 19:02:17.128: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-ddjtd\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.107.150.136\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.32.0.4:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jun 27 19:02:17.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-server-x6tdbol33slm'
Jun 27 19:02:17.214: INFO: stderr: ""
Jun 27 19:02:17.214: INFO: stdout: "Name:               hunter-server-x6tdbol33slm\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=hunter-server-x6tdbol33slm\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 16 Jun 2019 12:55:20 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sun, 16 Jun 2019 12:55:48 +0000   Sun, 16 Jun 2019 12:55:48 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Thu, 27 Jun 2019 19:02:07 +0000   Sun, 16 Jun 2019 12:55:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 27 Jun 2019 19:02:07 +0000   Sun, 16 Jun 2019 12:55:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 27 Jun 2019 19:02:07 +0000   Sun, 16 Jun 2019 12:55:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 27 Jun 2019 19:02:07 +0000   Sun, 16 Jun 2019 12:56:00 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  192.168.100.12\n  Hostname:    hunter-server-x6tdbol33slm\nCapacity:\n cpu:                4\n ephemeral-storage:  20263528Ki\n hugepages-2Mi:      0\n memory:             4045928Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18674867374\n hugepages-2Mi:      0\n memory:             3943528Ki\n pods:               110\nSystem Info:\n Machine ID:                 3d8dccd2e2dc43439a8a7bcb64960930\n System UUID:                3D8DCCD2-E2DC-4343-9A8A-7BCB64960930\n Boot ID:                    8456ffa0-d32c-4e2d-b5d0-8d3f937f2a85\n Kernel Version:             4.4.0-142-generic\n OS Image:                   Ubuntu 16.04.6 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.5\n Kubelet Version:            v1.13.7\n Kube-Proxy Version:         v1.13.7\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                  ------------  ----------  ---------------  -------------  ---\n  e2e-tests-kubectl-ddjtd    redis-master-c9smd                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s\n  kube-system                coredns-86c58d9df4-99n2k                              100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     11d\n  kube-system                coredns-86c58d9df4-zdm4x                              100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     11d\n  kube-system                etcd-hunter-server-x6tdbol33slm                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                kube-apiserver-hunter-server-x6tdbol33slm             250m (6%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                kube-controller-manager-hunter-server-x6tdbol33slm    200m (5%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                kube-proxy-ww64l                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                kube-scheduler-hunter-server-x6tdbol33slm             100m (2%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                weave-net-z4vkv                                       20m (0%)      0 (0%)      0 (0%)           0 (0%)         11d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                770m (19%)  0 (0%)\n  memory             140Mi (3%)  340Mi (8%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Jun 27 19:02:17.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-ddjtd'
Jun 27 19:02:17.291: INFO: stderr: ""
Jun 27 19:02:17.291: INFO: stdout: "Name:         e2e-tests-kubectl-ddjtd\nLabels:       e2e-framework=kubectl\n              e2e-run=dc35f27e-9903-11e9-8fa9-0242ac110005\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:02:17.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ddjtd" for this suite.
Jun 27 19:02:39.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:02:39.452: INFO: namespace: e2e-tests-kubectl-ddjtd, resource: bindings, ignored listing per whitelist
Jun 27 19:02:39.461: INFO: namespace e2e-tests-kubectl-ddjtd deletion completed in 22.167147903s

• [SLOW TEST:26.408 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
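The `kubectl describe` output shown above is flat `Key:   value` text; the e2e test asserts on substrings of it. For illustration only (this is not how the framework checks it), a small parser for the top-level fields, skipping indented sub-blocks and continuation lines:

```python
def parse_describe(text):
    """Parse the top-level 'Key:   value' lines of `kubectl describe` output.
    Indented continuation/sub-block lines are skipped for simplicity."""
    fields = {}
    for line in text.splitlines():
        if line and not line[0].isspace() and ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# Trimmed sample in the same shape as the rc describe output above.
sample = """Name:         redis-master
Namespace:    e2e-tests-kubectl-ddjtd
Labels:       app=redis
              role=master
Replicas:     1 current / 1 desired
"""
info = parse_describe(sample)
print(info["Name"], info["Replicas"])
```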
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:02:39.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-n9qw
STEP: Creating a pod to test atomic-volume-subpath
Jun 27 19:02:39.658: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-n9qw" in namespace "e2e-tests-subpath-hc8hb" to be "success or failure"
Jun 27 19:02:39.673: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Pending", Reason="", readiness=false. Elapsed: 15.783191ms
Jun 27 19:02:41.680: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022418583s
Jun 27 19:02:43.794: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136270972s
Jun 27 19:02:45.797: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139223218s
Jun 27 19:02:47.802: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Running", Reason="", readiness=false. Elapsed: 8.144376567s
Jun 27 19:02:49.807: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Running", Reason="", readiness=false. Elapsed: 10.149408382s
Jun 27 19:02:51.813: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Running", Reason="", readiness=false. Elapsed: 12.155220902s
Jun 27 19:02:53.819: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Running", Reason="", readiness=false. Elapsed: 14.161183113s
Jun 27 19:02:55.825: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Running", Reason="", readiness=false. Elapsed: 16.167807101s
Jun 27 19:02:57.830: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Running", Reason="", readiness=false. Elapsed: 18.172585354s
Jun 27 19:02:59.834: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Running", Reason="", readiness=false. Elapsed: 20.176431004s
Jun 27 19:03:01.839: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Running", Reason="", readiness=false. Elapsed: 22.181067952s
Jun 27 19:03:03.843: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Running", Reason="", readiness=false. Elapsed: 24.184932711s
Jun 27 19:03:05.847: INFO: Pod "pod-subpath-test-secret-n9qw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.188990793s
STEP: Saw pod success
Jun 27 19:03:05.847: INFO: Pod "pod-subpath-test-secret-n9qw" satisfied condition "success or failure"
Jun 27 19:03:05.849: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-subpath-test-secret-n9qw container test-container-subpath-secret-n9qw: 
STEP: delete the pod
Jun 27 19:03:05.919: INFO: Waiting for pod pod-subpath-test-secret-n9qw to disappear
Jun 27 19:03:05.930: INFO: Pod pod-subpath-test-secret-n9qw no longer exists
STEP: Deleting pod pod-subpath-test-secret-n9qw
Jun 27 19:03:05.930: INFO: Deleting pod "pod-subpath-test-secret-n9qw" in namespace "e2e-tests-subpath-hc8hb"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:03:05.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hc8hb" for this suite.
Jun 27 19:03:11.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:03:12.022: INFO: namespace: e2e-tests-subpath-hc8hb, resource: bindings, ignored listing per whitelist
Jun 27 19:03:12.038: INFO: namespace e2e-tests-subpath-hc8hb deletion completed in 6.102082s

• [SLOW TEST:32.577 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
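Pod deletion is confirmed the same way throughout this log ("Waiting for pod ... to disappear" / "... still exists" / "... no longer exists"): re-query until the GET reports not-found. A sketch of that pattern with the lookup stubbed; the 2-second interval and retry bound are taken from the cadence visible in the log, not from the framework source:

```python
def wait_for_disappear(pod_exists, attempts=150, on_wait=None):
    """Poll pod_exists() until it returns False, like the test's
    'Waiting for pod ... to disappear' loop (bounded retries)."""
    for _ in range(attempts):
        if not pod_exists():
            return True   # pod no longer exists
        if on_wait:
            on_wait()     # e.g. time.sleep(2) against a real cluster
    return False          # gave up; pod still exists

# Simulate a pod that survives two polls and is gone on the third.
remaining = iter([True, True, False])
print(wait_for_disappear(lambda: next(remaining)))  # True
```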
SSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:03:12.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-35bf03d3-990e-11e9-8fa9-0242ac110005
STEP: Creating secret with name s-test-opt-upd-35bf044b-990e-11e9-8fa9-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-35bf03d3-990e-11e9-8fa9-0242ac110005
STEP: Updating secret s-test-opt-upd-35bf044b-990e-11e9-8fa9-0242ac110005
STEP: Creating secret with name s-test-opt-create-35bf046f-990e-11e9-8fa9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:04:41.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vh99z" for this suite.
Jun 27 19:05:05.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:05:05.883: INFO: namespace: e2e-tests-projected-vh99z, resource: bindings, ignored listing per whitelist
Jun 27 19:05:05.905: INFO: namespace e2e-tests-projected-vh99z deletion completed in 24.272211017s

• [SLOW TEST:113.867 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:05:05.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 19:05:06.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-799f694c-990e-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-4dvk2" to be "success or failure"
Jun 27 19:05:06.074: INFO: Pod "downwardapi-volume-799f694c-990e-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573151ms
Jun 27 19:05:08.077: INFO: Pod "downwardapi-volume-799f694c-990e-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007884344s
Jun 27 19:05:10.082: INFO: Pod "downwardapi-volume-799f694c-990e-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013103771s
Jun 27 19:05:12.087: INFO: Pod "downwardapi-volume-799f694c-990e-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017651899s
STEP: Saw pod success
Jun 27 19:05:12.087: INFO: Pod "downwardapi-volume-799f694c-990e-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:05:12.090: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-799f694c-990e-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 19:05:12.137: INFO: Waiting for pod downwardapi-volume-799f694c-990e-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:05:12.146: INFO: Pod downwardapi-volume-799f694c-990e-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:05:12.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4dvk2" for this suite.
Jun 27 19:05:18.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:05:18.230: INFO: namespace: e2e-tests-downward-api-4dvk2, resource: bindings, ignored listing per whitelist
Jun 27 19:05:18.256: INFO: namespace e2e-tests-downward-api-4dvk2 deletion completed in 6.106366322s

• [SLOW TEST:12.351 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
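The Downward API test above creates a pod whose volume exposes `limits.cpu` and, since no limit is set, the file reports the node's allocatable CPU. A hypothetical reproduction of that pod (names like `downwardapi-volume-demo` are illustrative, not taken from the log, and this manifest is a sketch of what the conformance test does, not the test's own source):

```shell
# Sketch: a pod with a downward API volume item backed by resourceFieldRef.
# With no CPU limit on the container, cpu_limit falls back to node allocatable.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
# Once the pod has run, its log holds the reported CPU value:
kubectl logs downwardapi-volume-demo
```

This is a manifest sketch requiring a live cluster, so it is shown for illustration only.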
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:05:18.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun 27 19:05:25.120: INFO: Successfully updated pod "pod-update-8101330e-990e-11e9-8fa9-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jun 27 19:05:25.172: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:05:25.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zx274" for this suite.
Jun 27 19:05:41.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:05:41.208: INFO: namespace: e2e-tests-pods-zx274, resource: bindings, ignored listing per whitelist
Jun 27 19:05:41.263: INFO: namespace e2e-tests-pods-zx274 deletion completed in 16.086802734s

• [SLOW TEST:23.007 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:05:41.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 19:05:41.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ec1ff34-990e-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-tvj7z" to be "success or failure"
Jun 27 19:05:41.550: INFO: Pod "downwardapi-volume-8ec1ff34-990e-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.862449ms
Jun 27 19:05:43.553: INFO: Pod "downwardapi-volume-8ec1ff34-990e-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018113395s
Jun 27 19:05:45.556: INFO: Pod "downwardapi-volume-8ec1ff34-990e-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020857107s
Jun 27 19:05:47.562: INFO: Pod "downwardapi-volume-8ec1ff34-990e-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027352676s
STEP: Saw pod success
Jun 27 19:05:47.562: INFO: Pod "downwardapi-volume-8ec1ff34-990e-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:05:47.567: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-8ec1ff34-990e-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 19:05:47.774: INFO: Waiting for pod downwardapi-volume-8ec1ff34-990e-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:05:47.778: INFO: Pod downwardapi-volume-8ec1ff34-990e-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:05:47.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tvj7z" for this suite.
Jun 27 19:05:53.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:05:53.961: INFO: namespace: e2e-tests-downward-api-tvj7z, resource: bindings, ignored listing per whitelist
Jun 27 19:05:54.031: INFO: namespace e2e-tests-downward-api-tvj7z deletion completed in 6.242739961s

• [SLOW TEST:12.768 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:05:54.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-rt79l A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-rt79l;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-rt79l A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-rt79l;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-rt79l.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-rt79l.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-rt79l.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-rt79l.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-rt79l.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-rt79l.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-rt79l.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 141.248.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.248.141_udp@PTR;check="$$(dig +tcp +noall +answer +search 141.248.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.248.141_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-rt79l A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-rt79l;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-rt79l A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-rt79l;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-rt79l.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-rt79l.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-rt79l.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-rt79l.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-rt79l.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-rt79l.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-rt79l.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 141.248.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.248.141_udp@PTR;check="$$(dig +tcp +noall +answer +search 141.248.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.248.141_tcp@PTR;sleep 1; done

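The long wheezy/jessie one-liners above all follow one pattern: resolve a name, and if the answer is non-empty, write an `OK` marker that the test later collects from the pod's `/results` volume. A minimal sketch of one iteration, with `getent` standing in for `dig` (which may not be installed outside the probe image), `localhost` standing in for `dns-test-service`, and `/tmp/results` for `/results` (all three are stand-ins, not from the log):

```shell
# One iteration of the probe pattern: resolve, check non-empty, mark OK.
mkdir -p /tmp/results
check="$(getent hosts localhost)" \
  && test -n "$check" \
  && echo OK > /tmp/results/udp@localhost
# The marker file now exists only if resolution returned an answer.
cat /tmp/results/udp@localhost
```

The real probe wraps this in a `for i in $(seq 1 600)` loop with `sleep 1`, retrying until the prober sees every marker file.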
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 27 19:06:02.570: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.575: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.580: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-rt79l from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.585: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-rt79l from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.590: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.595: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.603: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.608: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.623: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.632: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.639: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.644: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.649: INFO: Unable to read 10.101.248.141_udp@PTR from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.653: INFO: Unable to read 10.101.248.141_tcp@PTR from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.658: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.662: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.666: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-rt79l from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.670: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-rt79l from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.676: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.684: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.689: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.698: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.703: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.709: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.714: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.718: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.730: INFO: Unable to read 10.101.248.141_udp@PTR from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.735: INFO: Unable to read 10.101.248.141_tcp@PTR from pod e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-96689c9b-990e-11e9-8fa9-0242ac110005)
Jun 27 19:06:02.735: INFO: Lookups using e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-rt79l wheezy_tcp@dns-test-service.e2e-tests-dns-rt79l wheezy_udp@dns-test-service.e2e-tests-dns-rt79l.svc wheezy_tcp@dns-test-service.e2e-tests-dns-rt79l.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.248.141_udp@PTR 10.101.248.141_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-rt79l jessie_tcp@dns-test-service.e2e-tests-dns-rt79l jessie_udp@dns-test-service.e2e-tests-dns-rt79l.svc jessie_tcp@dns-test-service.e2e-tests-dns-rt79l.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-rt79l.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-rt79l.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.248.141_udp@PTR 10.101.248.141_tcp@PTR]

Jun 27 19:06:07.870: INFO: DNS probes using e2e-tests-dns-rt79l/dns-test-96689c9b-990e-11e9-8fa9-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:06:08.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-rt79l" for this suite.
Jun 27 19:06:14.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:06:14.376: INFO: namespace: e2e-tests-dns-rt79l, resource: bindings, ignored listing per whitelist
Jun 27 19:06:14.488: INFO: namespace e2e-tests-dns-rt79l deletion completed in 6.20938174s

• [SLOW TEST:20.456 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:06:14.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jun 27 19:06:14.839: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 27 19:06:14.854: INFO: Waiting for terminating namespaces to be deleted...
Jun 27 19:06:14.857: INFO: Logging pods the kubelet thinks are on node hunter-server-x6tdbol33slm before test

Jun 27 19:06:14.862: INFO: etcd-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 19:06:14.862: INFO: kube-controller-manager-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 19:06:14.862: INFO: kube-apiserver-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 19:06:14.862: INFO: weave-net-z4vkv from kube-system started at 2019-06-16 12:55:36 +0000 UTC (2 container statuses recorded)
Jun 27 19:06:14.862: INFO: 	Container weave ready: true, restart count 0
Jun 27 19:06:14.862: INFO: 	Container weave-npc ready: true, restart count 0
Jun 27 19:06:14.862: INFO: kube-scheduler-hunter-server-x6tdbol33slm from kube-system started at  (0 container statuses recorded)
Jun 27 19:06:14.862: INFO: coredns-86c58d9df4-99n2k from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded)
Jun 27 19:06:14.862: INFO: 	Container coredns ready: true, restart count 0
Jun 27 19:06:14.862: INFO: coredns-86c58d9df4-zdm4x from kube-system started at 2019-06-16 12:56:01 +0000 UTC (1 container statuses recorded)
Jun 27 19:06:14.862: INFO: 	Container coredns ready: true, restart count 0
Jun 27 19:06:14.862: INFO: kube-proxy-ww64l from kube-system started at 2019-06-16 12:55:34 +0000 UTC (1 container statuses recorded)
Jun 27 19:06:14.862: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-x6tdbol33slm
Jun 27 19:06:14.897: INFO: Pod coredns-86c58d9df4-99n2k requesting resource cpu=100m on Node hunter-server-x6tdbol33slm
Jun 27 19:06:14.897: INFO: Pod coredns-86c58d9df4-zdm4x requesting resource cpu=100m on Node hunter-server-x6tdbol33slm
Jun 27 19:06:14.897: INFO: Pod etcd-hunter-server-x6tdbol33slm requesting resource cpu=0m on Node hunter-server-x6tdbol33slm
Jun 27 19:06:14.897: INFO: Pod kube-apiserver-hunter-server-x6tdbol33slm requesting resource cpu=250m on Node hunter-server-x6tdbol33slm
Jun 27 19:06:14.897: INFO: Pod kube-controller-manager-hunter-server-x6tdbol33slm requesting resource cpu=200m on Node hunter-server-x6tdbol33slm
Jun 27 19:06:14.897: INFO: Pod kube-proxy-ww64l requesting resource cpu=0m on Node hunter-server-x6tdbol33slm
Jun 27 19:06:14.897: INFO: Pod kube-scheduler-hunter-server-x6tdbol33slm requesting resource cpu=100m on Node hunter-server-x6tdbol33slm
Jun 27 19:06:14.897: INFO: Pod weave-net-z4vkv requesting resource cpu=20m on Node hunter-server-x6tdbol33slm
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a2a75ae5-990e-11e9-8fa9-0242ac110005.15ac23a7ec4a8aa1], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-x2l8w/filler-pod-a2a75ae5-990e-11e9-8fa9-0242ac110005 to hunter-server-x6tdbol33slm]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a2a75ae5-990e-11e9-8fa9-0242ac110005.15ac23a8408376e7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a2a75ae5-990e-11e9-8fa9-0242ac110005.15ac23a84ae02c30], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a2a75ae5-990e-11e9-8fa9-0242ac110005.15ac23a863aa3330], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ac23a8dbc8fd84], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-x6tdbol33slm
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:06:19.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-x2l8w" for this suite.
Jun 27 19:06:28.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:06:28.174: INFO: namespace: e2e-tests-sched-pred-x2l8w, resource: bindings, ignored listing per whitelist
Jun 27 19:06:28.179: INFO: namespace e2e-tests-sched-pred-x2l8w deletion completed in 8.185295098s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:13.691 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
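The scheduling failure above is a plain resource-fit check: the node's allocatable CPU minus the sum of existing requests must cover the new pod's request, or the pod fails with "Insufficient cpu". A sketch using the per-pod requests logged above, assuming a 1-CPU (1000m) allocatable node and a 500m new pod (both figures are assumptions; the log does not state them):

```shell
# Requests logged for the node, in millicores (coredns x2, etcd, apiserver,
# controller-manager, kube-proxy, scheduler, weave-net).
allocatable_m=1000                               # assumed node allocatable
requested_m=$((100+100+0+250+200+0+100+20))      # sums the log's values
free_m=$((allocatable_m - requested_m))
echo "free=${free_m}m"
new_pod_m=500                                    # assumed filler request
if [ "$new_pod_m" -gt "$free_m" ]; then
  # Same shape as the event the test waits for:
  echo "0/1 nodes are available: 1 Insufficient cpu."
fi
```

The test's "filler" pods deliberately consume most of the remaining CPU first, so the final `additional-pod` is guaranteed to hit this branch.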
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:06:28.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9zdfw
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 27 19:06:28.310: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 27 19:06:50.390: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9zdfw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:06:50.390: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:06:50.563: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:06:50.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-9zdfw" for this suite.
Jun 27 19:07:14.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:07:14.675: INFO: namespace: e2e-tests-pod-network-test-9zdfw, resource: bindings, ignored listing per whitelist
Jun 27 19:07:14.686: INFO: namespace e2e-tests-pod-network-test-9zdfw deletion completed in 24.119476124s

• [SLOW TEST:46.506 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:07:14.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jun 27 19:07:24.844: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mgnb2 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:07:24.844: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:07:25.054: INFO: Exec stderr: ""
Jun 27 19:07:25.054: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mgnb2 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:07:25.054: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:07:25.221: INFO: Exec stderr: ""
Jun 27 19:07:25.221: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mgnb2 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:07:25.221: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:07:25.385: INFO: Exec stderr: ""
Jun 27 19:07:25.385: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mgnb2 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:07:25.385: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:07:25.564: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jun 27 19:07:25.564: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mgnb2 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:07:25.564: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:07:25.731: INFO: Exec stderr: ""
Jun 27 19:07:25.731: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mgnb2 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:07:25.731: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:07:25.868: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jun 27 19:07:25.868: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mgnb2 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:07:25.868: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:07:26.039: INFO: Exec stderr: ""
Jun 27 19:07:26.039: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mgnb2 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:07:26.039: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:07:26.193: INFO: Exec stderr: ""
Jun 27 19:07:26.193: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mgnb2 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:07:26.193: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:07:26.358: INFO: Exec stderr: ""
Jun 27 19:07:26.358: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-mgnb2 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:07:26.358: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:07:26.600: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:07:26.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-mgnb2" for this suite.
Jun 27 19:08:18.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:08:18.668: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-mgnb2, resource: bindings, ignored listing per whitelist
Jun 27 19:08:18.745: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-mgnb2 deletion completed in 52.138373505s

• [SLOW TEST:64.060 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:08:18.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jun 27 19:08:18.839: INFO: PodSpec: initContainers in spec.initContainers
Jun 27 19:09:08.571: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ec8743b0-990e-11e9-8fa9-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-fhp4f", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-fhp4f/pods/pod-init-ec8743b0-990e-11e9-8fa9-0242ac110005", UID:"ec893bc4-990e-11e9-a678-fa163e0cec1d", ResourceVersion:"1378719", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63697259298, loc:(*time.Location)(0x7947a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"839037102"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ngckg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001f27840), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ngckg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ngckg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ngckg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001bd14a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-x6tdbol33slm", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001487b00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001bd1520)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001bd15a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001bd15a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001bd15ac)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697259298, loc:(*time.Location)(0x7947a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697259298, loc:(*time.Location)(0x7947a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697259298, loc:(*time.Location)(0x7947a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63697259298, loc:(*time.Location)(0x7947a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.100.12", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc001b19ba0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000443650)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0004437a0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5849c59be058a712f6792d05742aa54b3cfe1f0ec7565c14a511febff6a850d7"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001b19be0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001b19bc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:09:08.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-fhp4f" for this suite.
Jun 27 19:09:30.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:09:30.750: INFO: namespace: e2e-tests-init-container-fhp4f, resource: bindings, ignored listing per whitelist
Jun 27 19:09:30.840: INFO: namespace e2e-tests-init-container-fhp4f deletion completed in 22.152794352s

• [SLOW TEST:72.095 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:09:30.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jun 27 19:09:30.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-htcq6'
Jun 27 19:09:31.099: INFO: stderr: ""
Jun 27 19:09:31.099: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jun 27 19:09:32.103: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:09:32.103: INFO: Found 0 / 1
Jun 27 19:09:33.105: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:09:33.105: INFO: Found 0 / 1
Jun 27 19:09:34.103: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:09:34.103: INFO: Found 1 / 1
Jun 27 19:09:34.103: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jun 27 19:09:34.105: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:09:34.105: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun 27 19:09:34.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-vvqhc --namespace=e2e-tests-kubectl-htcq6 -p {"metadata":{"annotations":{"x":"y"}}}'
Jun 27 19:09:34.194: INFO: stderr: ""
Jun 27 19:09:34.194: INFO: stdout: "pod/redis-master-vvqhc patched\n"
STEP: checking annotations
Jun 27 19:09:34.196: INFO: Selector matched 1 pods for map[app:redis]
Jun 27 19:09:34.196: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:09:34.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-htcq6" for this suite.
Jun 27 19:09:56.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:09:56.235: INFO: namespace: e2e-tests-kubectl-htcq6, resource: bindings, ignored listing per whitelist
Jun 27 19:09:56.311: INFO: namespace e2e-tests-kubectl-htcq6 deletion completed in 22.112159587s

• [SLOW TEST:25.470 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:09:56.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 19:09:56.448: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26b4565b-990f-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-vwp4v" to be "success or failure"
Jun 27 19:09:56.458: INFO: Pod "downwardapi-volume-26b4565b-990f-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.703665ms
Jun 27 19:09:58.464: INFO: Pod "downwardapi-volume-26b4565b-990f-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015445218s
Jun 27 19:10:00.469: INFO: Pod "downwardapi-volume-26b4565b-990f-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020515506s
STEP: Saw pod success
Jun 27 19:10:00.469: INFO: Pod "downwardapi-volume-26b4565b-990f-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:10:00.474: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-26b4565b-990f-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 19:10:00.517: INFO: Waiting for pod downwardapi-volume-26b4565b-990f-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:10:00.521: INFO: Pod downwardapi-volume-26b4565b-990f-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:10:00.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vwp4v" for this suite.
Jun 27 19:10:06.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:10:06.646: INFO: namespace: e2e-tests-downward-api-vwp4v, resource: bindings, ignored listing per whitelist
Jun 27 19:10:06.698: INFO: namespace e2e-tests-downward-api-vwp4v deletion completed in 6.171392555s

• [SLOW TEST:10.387 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:10:06.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-xj7tb.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xj7tb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xj7tb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-xj7tb.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xj7tb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xj7tb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 27 19:10:12.941: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:12.953: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:12.962: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:12.969: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:12.976: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:12.983: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:12.989: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xj7tb.svc.cluster.local from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:12.994: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.001: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.006: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.011: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.017: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.023: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.029: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.043: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.048: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.052: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xj7tb.svc.cluster.local from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.056: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.060: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.063: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005: the server could not find the requested resource (get pods dns-test-2ce95345-990f-11e9-8fa9-0242ac110005)
Jun 27 19:10:13.063: INFO: Lookups using e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xj7tb.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xj7tb.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jun 27 19:10:18.155: INFO: DNS probes using e2e-tests-dns-xj7tb/dns-test-2ce95345-990f-11e9-8fa9-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:10:18.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-xj7tb" for this suite.
Jun 27 19:10:24.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:10:24.328: INFO: namespace: e2e-tests-dns-xj7tb, resource: bindings, ignored listing per whitelist
Jun 27 19:10:24.391: INFO: namespace e2e-tests-dns-xj7tb deletion completed in 6.092876367s

• [SLOW TEST:17.692 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
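The DNS conformance test above probes the same fixed set of cluster-internal names from two client images ("wheezy" and "jessie"), over both UDP and TCP, which is why the failure list repeats each name four ways. A minimal sketch of how that probe matrix can be enumerated (the names are taken from the log; this loop is only illustrative and is not the test's actual implementation, which runs the lookups inside a probe pod):

```shell
# Build the probe matrix the DNS test iterates over: two client images,
# two protocols, and a set of cluster-internal service names. The real
# test executes each lookup inside a pod; this only enumerates the names.
probes=""
for img in wheezy jessie; do
  for proto in udp tcp; do
    for name in kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local; do
      probes="$probes ${img}_${proto}@${name}"
    done
  done
done
echo $probes
```

Each entry matches an "Unable to read …" line in the log; once cluster DNS answers, the whole matrix flips to "DNS probes … succeeded" in a single retry pass, as seen at 19:10:18 above.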
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:10:24.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zxtp4
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 27 19:10:24.613: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 27 19:10:52.787: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-zxtp4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:10:52.787: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:10:53.057: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:10:53.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-zxtp4" for this suite.
Jun 27 19:11:15.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:11:15.142: INFO: namespace: e2e-tests-pod-network-test-zxtp4, resource: bindings, ignored listing per whitelist
Jun 27 19:11:15.209: INFO: namespace e2e-tests-pod-network-test-zxtp4 deletion completed in 22.145382243s

• [SLOW TEST:50.818 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
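The intra-pod connectivity check above works by exec-ing `curl` in a host test pod against the test container's `/dial` endpoint, which in turn dials the target netserver pod and reports what it reached. A sketch of how that probe URL is assembled (the IPs and port are the illustrative values from this run's log, not fixed constants of the test):

```shell
# Assemble the /dial probe URL from the networking test. In this run,
# 10.32.0.5 is the test-container pod and 10.32.0.4 the target pod,
# both listening on 8080; tries=1 asks for a single dial attempt.
probe_ip="10.32.0.5"
target_ip="10.32.0.4"
port=8080
url="http://${probe_ip}:${port}/dial?request=hostName&protocol=http&host=${target_ip}&port=${port}&tries=1"
echo "$url"
# Inside the cluster the test issues roughly:
#   kubectl exec host-test-container-pod -- /bin/sh -c "curl -g -q -s '$url'"
```

The `-g` flag disables curl's URL globbing so the literal `?` and `&` survive; the test then waits until the set of unreachable endpoints ("Waiting for endpoints: map[]") is empty.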
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:11:15.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-q8tpt
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-q8tpt
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-q8tpt
Jun 27 19:11:15.372: INFO: Found 0 stateful pods, waiting for 1
Jun 27 19:11:25.379: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jun 27 19:11:25.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q8tpt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 27 19:11:25.790: INFO: stderr: ""
Jun 27 19:11:25.790: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 27 19:11:25.790: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 27 19:11:25.795: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 27 19:11:35.801: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 27 19:11:35.802: INFO: Waiting for statefulset status.replicas updated to 0
Jun 27 19:11:35.836: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jun 27 19:11:35.836: INFO: ss-0  hunter-server-x6tdbol33slm  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  }]
Jun 27 19:11:35.836: INFO: ss-1                              Pending         []
Jun 27 19:11:35.836: INFO: 
Jun 27 19:11:35.836: INFO: StatefulSet ss has not reached scale 3, at 2
Jun 27 19:11:37.030: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984801192s
Jun 27 19:11:38.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.790231096s
Jun 27 19:11:39.041: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.78429037s
Jun 27 19:11:40.047: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.77995095s
Jun 27 19:11:41.051: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.773839016s
Jun 27 19:11:42.058: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.769824723s
Jun 27 19:11:43.066: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.762536069s
Jun 27 19:11:44.074: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.754617524s
Jun 27 19:11:45.078: INFO: Verifying statefulset ss doesn't scale past 3 for another 746.132694ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-q8tpt
Jun 27 19:11:46.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q8tpt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 27 19:11:46.290: INFO: stderr: ""
Jun 27 19:11:46.290: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 27 19:11:46.290: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 27 19:11:46.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q8tpt ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 27 19:11:46.504: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Jun 27 19:11:46.504: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 27 19:11:46.504: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 27 19:11:46.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q8tpt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 27 19:11:46.692: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Jun 27 19:11:46.692: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 27 19:11:46.692: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 27 19:11:46.697: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jun 27 19:11:56.704: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 19:11:56.704: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 19:11:56.704: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jun 27 19:11:56.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q8tpt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 27 19:11:57.080: INFO: stderr: ""
Jun 27 19:11:57.080: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 27 19:11:57.080: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 27 19:11:57.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q8tpt ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 27 19:11:57.336: INFO: stderr: ""
Jun 27 19:11:57.336: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 27 19:11:57.336: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 27 19:11:57.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q8tpt ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 27 19:11:57.671: INFO: stderr: ""
Jun 27 19:11:57.671: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 27 19:11:57.671: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 27 19:11:57.671: INFO: Waiting for statefulset status.replicas updated to 0
Jun 27 19:11:57.702: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jun 27 19:12:07.720: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 27 19:12:07.720: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jun 27 19:12:07.720: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jun 27 19:12:07.845: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jun 27 19:12:07.845: INFO: ss-0  hunter-server-x6tdbol33slm  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  }]
Jun 27 19:12:07.845: INFO: ss-1  hunter-server-x6tdbol33slm  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:07.845: INFO: ss-2  hunter-server-x6tdbol33slm  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:07.845: INFO: 
Jun 27 19:12:07.845: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 27 19:12:08.853: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jun 27 19:12:08.853: INFO: ss-0  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  }]
Jun 27 19:12:08.853: INFO: ss-1  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:08.853: INFO: ss-2  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:08.853: INFO: 
Jun 27 19:12:08.853: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 27 19:12:09.880: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jun 27 19:12:09.880: INFO: ss-0  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  }]
Jun 27 19:12:09.880: INFO: ss-1  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:09.880: INFO: ss-2  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:09.880: INFO: 
Jun 27 19:12:09.880: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 27 19:12:10.888: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jun 27 19:12:10.888: INFO: ss-0  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  }]
Jun 27 19:12:10.888: INFO: ss-1  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:10.888: INFO: ss-2  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:10.888: INFO: 
Jun 27 19:12:10.888: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 27 19:12:11.893: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jun 27 19:12:11.893: INFO: ss-0  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  }]
Jun 27 19:12:11.893: INFO: ss-1  hunter-server-x6tdbol33slm  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:11.893: INFO: ss-2  hunter-server-x6tdbol33slm  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:11.893: INFO: 
Jun 27 19:12:11.893: INFO: StatefulSet ss has not reached scale 0, at 3
Jun 27 19:12:12.901: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jun 27 19:12:12.901: INFO: ss-0  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  }]
Jun 27 19:12:12.901: INFO: ss-2  hunter-server-x6tdbol33slm  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:12.902: INFO: 
Jun 27 19:12:12.902: INFO: StatefulSet ss has not reached scale 0, at 2
Jun 27 19:12:13.905: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jun 27 19:12:13.905: INFO: ss-0  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  }]
Jun 27 19:12:13.905: INFO: ss-2  hunter-server-x6tdbol33slm  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:13.905: INFO: 
Jun 27 19:12:13.905: INFO: StatefulSet ss has not reached scale 0, at 2
Jun 27 19:12:14.909: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jun 27 19:12:14.909: INFO: ss-0  hunter-server-x6tdbol33slm  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:15 +0000 UTC  }]
Jun 27 19:12:14.909: INFO: ss-2  hunter-server-x6tdbol33slm  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-27 19:11:35 +0000 UTC  }]
Jun 27 19:12:14.909: INFO: 
Jun 27 19:12:14.909: INFO: StatefulSet ss has not reached scale 0, at 2
Jun 27 19:12:15.912: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.899561917s
Jun 27 19:12:16.917: INFO: Verifying statefulset ss doesn't scale past 0 for another 896.661227ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-q8tpt
Jun 27 19:12:17.924: INFO: Scaling statefulset ss to 0
Jun 27 19:12:17.941: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jun 27 19:12:17.946: INFO: Deleting all statefulset in ns e2e-tests-statefulset-q8tpt
Jun 27 19:12:17.951: INFO: Scaling statefulset ss to 0
Jun 27 19:12:17.967: INFO: Waiting for statefulset status.replicas updated to 0
Jun 27 19:12:17.971: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:12:17.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-q8tpt" for this suite.
Jun 27 19:12:24.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:12:24.131: INFO: namespace: e2e-tests-statefulset-q8tpt, resource: bindings, ignored listing per whitelist
Jun 27 19:12:24.171: INFO: namespace e2e-tests-statefulset-q8tpt deletion completed in 6.171823719s

• [SLOW TEST:68.962 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
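The burst-scaling test above deliberately breaks pod readiness by moving nginx's `index.html` aside (the HTTP readiness probe then fails) and later restores it, while a bounded poll repeatedly confirms the StatefulSet does not scale past the expected replica count, logging the remaining budget each second. A sketch of that poll pattern, with a stand-in condition (`check` here always fails so the poll times out; the real test queries the replica count via the API):

```shell
# Bounded poll with a logged countdown, mirroring the
# "Verifying statefulset ss doesn't scale past N for another Xs" loop.
check() { false; }   # stand-in: the real test compares status.replicas

timeout=2
deadline=$(( $(date +%s) + timeout ))
result="timed out"
while [ "$(date +%s)" -lt "$deadline" ]; do
  if check; then result="condition met"; break; fi
  echo "condition not met; $(( deadline - $(date +%s) ))s remaining"
  sleep 1
done
echo "$result"
```

In the log the readiness toggle itself is the `mv -v /usr/share/nginx/html/index.html /tmp/ || true` exec (and its inverse); the trailing `|| true` keeps the exec from failing on pods where the file was already moved, which is why ss-1 and ss-2 report "can't rename" on stderr yet the step still succeeds.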
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:12:24.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 19:12:24.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jun 27 19:12:24.464: INFO: stderr: ""
Jun 27 19:12:24.464: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.7\", GitCommit:\"4683545293d792934a7a7e12f2cc47d20b2dd01b\", GitTreeState:\"clean\", BuildDate:\"2019-06-27T17:43:29Z\", GoVersion:\"go1.11.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.7\", GitCommit:\"4683545293d792934a7a7e12f2cc47d20b2dd01b\", GitTreeState:\"clean\", BuildDate:\"2019-06-06T01:39:30Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:12:24.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j4fb4" for this suite.
Jun 27 19:12:30.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:12:30.579: INFO: namespace: e2e-tests-kubectl-j4fb4, resource: bindings, ignored listing per whitelist
Jun 27 19:12:30.631: INFO: namespace e2e-tests-kubectl-j4fb4 deletion completed in 6.162189338s

• [SLOW TEST:6.459 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
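The check behind "should check is all data is printed" is essentially that both the client and the server `version.Info` structs appear on stdout. A minimal sketch of that assertion in Python (the helper name is illustrative; the substrings match the stdout logged above):

```python
def kubectl_version_prints_all_data(stdout: str) -> bool:
    """Loosely mirrors the e2e check: `kubectl version` stdout must
    contain BOTH the client and the server version lines."""
    return "Client Version" in stdout and "Server Version" in stdout
```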
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:12:30.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 27 19:12:30.852: INFO: Waiting up to 5m0s for pod "pod-82baa475-990f-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-47fr4" to be "success or failure"
Jun 27 19:12:30.856: INFO: Pod "pod-82baa475-990f-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328653ms
Jun 27 19:12:32.860: INFO: Pod "pod-82baa475-990f-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008107975s
Jun 27 19:12:34.921: INFO: Pod "pod-82baa475-990f-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069372232s
STEP: Saw pod success
Jun 27 19:12:34.921: INFO: Pod "pod-82baa475-990f-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:12:34.927: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-82baa475-990f-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 19:12:34.969: INFO: Waiting for pod pod-82baa475-990f-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:12:34.974: INFO: Pod pod-82baa475-990f-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:12:34.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-47fr4" for this suite.
Jun 27 19:12:41.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:12:41.120: INFO: namespace: e2e-tests-emptydir-47fr4, resource: bindings, ignored listing per whitelist
Jun 27 19:12:41.120: INFO: namespace e2e-tests-emptydir-47fr4 deletion completed in 6.143097022s

• [SLOW TEST:10.488 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
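The (non-root,0644,default) case mounts an emptyDir, writes a file with mode 0644, and has the test container print the file's permissions; the framework then matches that line in the container log. The mode-to-string step can be reproduced with the standard library (a sketch, not the actual mounttest image code):

```python
import stat

def perm_string(mode: int) -> str:
    """Render a file mode the way `ls -l` shows it; S_IFREG marks the
    entry as a regular file so the leading character is '-'."""
    return stat.filemode(stat.S_IFREG | mode)
```

`perm_string(0o644)` yields `-rw-r--r--`, the form of the permissions line the test expects; the (root,0666,default) case later in this run checks `-rw-rw-rw-` the same way.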
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:12:41.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ww96k
Jun 27 19:12:47.352: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ww96k
STEP: checking the pod's current state and verifying that restartCount is present
Jun 27 19:12:47.356: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:16:48.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ww96k" for this suite.
Jun 27 19:16:54.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:16:54.668: INFO: namespace: e2e-tests-container-probe-ww96k, resource: bindings, ignored listing per whitelist
Jun 27 19:16:54.711: INFO: namespace e2e-tests-container-probe-ww96k deletion completed in 6.08940312s

• [SLOW TEST:253.592 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
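The roughly four-minute gap between 19:12:47 and 19:16:48 is the test deliberately watching the pod to confirm restartCount never moves off its initial value. Stripped of the client-go details, the pattern looks like this (function and parameter names here are illustrative, not the framework's):

```python
import time

def stays_unrestarted(get_restart_count, initial: int, duration: float,
                      interval: float = 2.0, clock=time.monotonic,
                      sleep=time.sleep) -> bool:
    """Poll restartCount for `duration` seconds; fail fast on any increase."""
    deadline = clock() + duration
    while clock() < deadline:
        if get_restart_count() > initial:
            return False
        sleep(interval)
    return True
```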
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:16:54.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-201e8d38-9910-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 19:16:54.917: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-20210e20-9910-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-d6n2m" to be "success or failure"
Jun 27 19:16:55.033: INFO: Pod "pod-projected-configmaps-20210e20-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 115.496849ms
Jun 27 19:16:57.036: INFO: Pod "pod-projected-configmaps-20210e20-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118618686s
Jun 27 19:16:59.041: INFO: Pod "pod-projected-configmaps-20210e20-9910-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123371332s
STEP: Saw pod success
Jun 27 19:16:59.041: INFO: Pod "pod-projected-configmaps-20210e20-9910-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:16:59.044: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-20210e20-9910-11e9-8fa9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jun 27 19:16:59.093: INFO: Waiting for pod pod-projected-configmaps-20210e20-9910-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:16:59.101: INFO: Pod pod-projected-configmaps-20210e20-9910-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:16:59.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d6n2m" for this suite.
Jun 27 19:17:05.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:17:05.215: INFO: namespace: e2e-tests-projected-d6n2m, resource: bindings, ignored listing per whitelist
Jun 27 19:17:05.276: INFO: namespace e2e-tests-projected-d6n2m deletion completed in 6.170677709s

• [SLOW TEST:10.565 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
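For a projected configMap volume, what the kubelet does is, in essence, materialise each data key as a file under the mount path; the test container then cats the file and the framework compares the log output. A rough Python analogy of that projection (not kubelet code):

```python
import pathlib
import tempfile

def project_configmap(data: dict, mount_dir: str) -> None:
    """Write each configMap data key as a file named after the key."""
    for key, value in data.items():
        pathlib.Path(mount_dir, key).write_text(value)
```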
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:17:05.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-2659b9cc-9910-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 19:17:05.358: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-265a584f-9910-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-trx25" to be "success or failure"
Jun 27 19:17:05.367: INFO: Pod "pod-projected-configmaps-265a584f-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.522689ms
Jun 27 19:17:07.382: INFO: Pod "pod-projected-configmaps-265a584f-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0246062s
Jun 27 19:17:09.387: INFO: Pod "pod-projected-configmaps-265a584f-9910-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029754549s
STEP: Saw pod success
Jun 27 19:17:09.387: INFO: Pod "pod-projected-configmaps-265a584f-9910-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:17:09.391: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-265a584f-9910-11e9-8fa9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jun 27 19:17:09.431: INFO: Waiting for pod pod-projected-configmaps-265a584f-9910-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:17:09.495: INFO: Pod pod-projected-configmaps-265a584f-9910-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:17:09.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-trx25" for this suite.
Jun 27 19:17:15.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:17:15.550: INFO: namespace: e2e-tests-projected-trx25, resource: bindings, ignored listing per whitelist
Jun 27 19:17:15.598: INFO: namespace e2e-tests-projected-trx25 deletion completed in 6.097449239s

• [SLOW TEST:10.322 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:17:15.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 27 19:17:15.831: INFO: Waiting up to 5m0s for pod "pod-2c98ccaa-9910-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-98kz4" to be "success or failure"
Jun 27 19:17:15.835: INFO: Pod "pod-2c98ccaa-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.77032ms
Jun 27 19:17:17.964: INFO: Pod "pod-2c98ccaa-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132859842s
Jun 27 19:17:20.016: INFO: Pod "pod-2c98ccaa-9910-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.184657919s
STEP: Saw pod success
Jun 27 19:17:20.016: INFO: Pod "pod-2c98ccaa-9910-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:17:20.019: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-2c98ccaa-9910-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 19:17:20.469: INFO: Waiting for pod pod-2c98ccaa-9910-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:17:20.474: INFO: Pod pod-2c98ccaa-9910-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:17:20.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-98kz4" for this suite.
Jun 27 19:17:26.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:17:26.572: INFO: namespace: e2e-tests-emptydir-98kz4, resource: bindings, ignored listing per whitelist
Jun 27 19:17:26.643: INFO: namespace e2e-tests-emptydir-98kz4 deletion completed in 6.159172671s

• [SLOW TEST:11.045 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:17:26.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-33196a12-9910-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 19:17:26.822: INFO: Waiting up to 5m0s for pod "pod-secrets-332585c3-9910-11e9-8fa9-0242ac110005" in namespace "e2e-tests-secrets-lmpbr" to be "success or failure"
Jun 27 19:17:26.847: INFO: Pod "pod-secrets-332585c3-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.750661ms
Jun 27 19:17:28.857: INFO: Pod "pod-secrets-332585c3-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034677814s
Jun 27 19:17:30.868: INFO: Pod "pod-secrets-332585c3-9910-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045918423s
STEP: Saw pod success
Jun 27 19:17:30.868: INFO: Pod "pod-secrets-332585c3-9910-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:17:30.872: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-332585c3-9910-11e9-8fa9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jun 27 19:17:30.923: INFO: Waiting for pod pod-secrets-332585c3-9910-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:17:30.937: INFO: Pod pod-secrets-332585c3-9910-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:17:30.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lmpbr" for this suite.
Jun 27 19:17:37.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:17:37.065: INFO: namespace: e2e-tests-secrets-lmpbr, resource: bindings, ignored listing per whitelist
Jun 27 19:17:37.101: INFO: namespace e2e-tests-secrets-lmpbr deletion completed in 6.159230598s

• [SLOW TEST:10.458 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
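Secret data travels through the API base64-encoded, but the file the pod reads at the mount point holds the decoded bytes, with the per-item mode from the volume spec applied (the "Item Mode set" part of this test). The decode step, as a sketch:

```python
import base64

def secret_file_contents(api_value: str) -> bytes:
    """What lands on disk for one secret key: the base64-decoded payload."""
    return base64.b64decode(api_value)
```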
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:17:37.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-mbgz6
Jun 27 19:17:41.230: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-mbgz6
STEP: checking the pod's current state and verifying that restartCount is present
Jun 27 19:17:41.233: INFO: Initial restart count of pod liveness-http is 0
Jun 27 19:17:59.363: INFO: Restart count of pod e2e-tests-container-probe-mbgz6/liveness-http is now 1 (18.129992512s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:17:59.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-mbgz6" for this suite.
Jun 27 19:18:05.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:18:05.468: INFO: namespace: e2e-tests-container-probe-mbgz6, resource: bindings, ignored listing per whitelist
Jun 27 19:18:05.562: INFO: namespace e2e-tests-container-probe-mbgz6 deletion completed in 6.155194061s

• [SLOW TEST:28.461 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
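The restart at 18s is by design: the liveness-http test image answers /healthz with 200 for a short window and with a failure status afterwards, so the kubelet's probe starts failing and restarts the container. The behaviour can be modelled with an injected clock (a sketch of the idea, not the actual image; the 10-second window is an assumption for illustration):

```python
def healthz_status(started_at: float, now: float, healthy_for: float = 10.0) -> int:
    """200 while the server is young, 500 once `healthy_for` seconds pass."""
    return 200 if now - started_at < healthy_for else 500
```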
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:18:05.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jun 27 19:18:05.742: INFO: Waiting up to 5m0s for pod "var-expansion-4a546801-9910-11e9-8fa9-0242ac110005" in namespace "e2e-tests-var-expansion-vrpvw" to be "success or failure"
Jun 27 19:18:05.745: INFO: Pod "var-expansion-4a546801-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.53927ms
Jun 27 19:18:07.752: INFO: Pod "var-expansion-4a546801-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009714178s
Jun 27 19:18:09.760: INFO: Pod "var-expansion-4a546801-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017840225s
Jun 27 19:18:11.767: INFO: Pod "var-expansion-4a546801-9910-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025203784s
STEP: Saw pod success
Jun 27 19:18:11.767: INFO: Pod "var-expansion-4a546801-9910-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:18:11.774: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod var-expansion-4a546801-9910-11e9-8fa9-0242ac110005 container dapi-container: 
STEP: delete the pod
Jun 27 19:18:11.897: INFO: Waiting for pod var-expansion-4a546801-9910-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:18:11.925: INFO: Pod var-expansion-4a546801-9910-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:18:11.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-vrpvw" for this suite.
Jun 27 19:18:17.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:18:17.998: INFO: namespace: e2e-tests-var-expansion-vrpvw, resource: bindings, ignored listing per whitelist
Jun 27 19:18:18.058: INFO: namespace e2e-tests-var-expansion-vrpvw deletion completed in 6.127903211s

• [SLOW TEST:12.495 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
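Env composition means a later env entry may reference an earlier one with `$(NAME)`; the kubelet substitutes names it knows and leaves unresolved references verbatim. A simplified model of that substitution (the real rules, including `$$` escaping, live in Kubernetes' expansion package and are not reproduced here):

```python
import re

def expand(value: str, env: dict) -> str:
    """Replace $(NAME) with env[NAME]; leave unresolved references as-is."""
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
                  lambda m: env.get(m.group(1), m.group(0)),
                  value)
```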
SSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:18:18.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-51c8e20f-9910-11e9-8fa9-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-51c8e260-9910-11e9-8fa9-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-51c8e20f-9910-11e9-8fa9-0242ac110005
STEP: Updating configmap cm-test-opt-upd-51c8e260-9910-11e9-8fa9-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-51c8e27b-9910-11e9-8fa9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:19:51.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tczxt" for this suite.
Jun 27 19:20:13.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:20:13.301: INFO: namespace: e2e-tests-projected-tczxt, resource: bindings, ignored listing per whitelist
Jun 27 19:20:13.404: INFO: namespace e2e-tests-projected-tczxt deletion completed in 22.195057408s

• [SLOW TEST:115.346 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
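The long tail of this test (most of the 115s is spent inside "waiting to observe update in volume") comes from the kubelet's periodic sync: configMap-backed files converge eventually, so the test polls the mounted file contents until they match or a timeout expires. The generic wait loop, sketched with an injectable clock and sleep (names are illustrative):

```python
import time

def wait_until(predicate, timeout: float, interval: float = 2.0,
               clock=time.monotonic, sleep=time.sleep) -> bool:
    """Poll `predicate` until it returns True or `timeout` seconds elapse."""
    deadline = clock() + timeout
    while clock() < deadline:
        if predicate():
            return True
        sleep(interval)
    return False
```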
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:20:13.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jun 27 19:20:13.615: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-a,UID:968d55f1-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380104,Generation:0,CreationTimestamp:2019-06-27 19:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 27 19:20:13.615: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-a,UID:968d55f1-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380104,Generation:0,CreationTimestamp:2019-06-27 19:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jun 27 19:20:23.626: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-a,UID:968d55f1-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380117,Generation:0,CreationTimestamp:2019-06-27 19:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jun 27 19:20:23.627: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-a,UID:968d55f1-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380117,Generation:0,CreationTimestamp:2019-06-27 19:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jun 27 19:20:33.634: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-a,UID:968d55f1-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380130,Generation:0,CreationTimestamp:2019-06-27 19:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 27 19:20:33.634: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-a,UID:968d55f1-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380130,Generation:0,CreationTimestamp:2019-06-27 19:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jun 27 19:20:43.649: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-a,UID:968d55f1-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380143,Generation:0,CreationTimestamp:2019-06-27 19:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 27 19:20:43.649: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-a,UID:968d55f1-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380143,Generation:0,CreationTimestamp:2019-06-27 19:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jun 27 19:20:53.761: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-b,UID:ae6f393b-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380156,Generation:0,CreationTimestamp:2019-06-27 19:20:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 27 19:20:53.761: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-b,UID:ae6f393b-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380156,Generation:0,CreationTimestamp:2019-06-27 19:20:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jun 27 19:21:03.793: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-b,UID:ae6f393b-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380169,Generation:0,CreationTimestamp:2019-06-27 19:20:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 27 19:21:03.793: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-v7449,SelfLink:/api/v1/namespaces/e2e-tests-watch-v7449/configmaps/e2e-watch-test-configmap-b,UID:ae6f393b-9910-11e9-a678-fa163e0cec1d,ResourceVersion:1380169,Generation:0,CreationTimestamp:2019-06-27 19:20:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:21:13.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-v7449" for this suite.
Jun 27 19:21:19.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:21:19.977: INFO: namespace: e2e-tests-watch-v7449, resource: bindings, ignored listing per whitelist
Jun 27 19:21:19.979: INFO: namespace e2e-tests-watch-v7449 deletion completed in 6.181030947s

• [SLOW TEST:66.575 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
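The doubled MODIFIED/DELETED lines above are expected: this test runs several watchers with different label selectors (one per label value, plus one set-based selector covering both), so each event is reported once per matching watcher. A minimal sketch of that dispatch, using the label values from the log (the `matches`/`observers` helpers are simplifications, not the real apiserver watch machinery):

```python
# Which watchers see a configmap event? A selector maps a label key to the
# set of acceptable values; a watcher observes an event iff every
# requirement in its selector is satisfied by the object's labels.

def matches(selector: dict, labels: dict) -> bool:
    """True if every key's value in `labels` is in the selector's allowed set."""
    return all(labels.get(k) in vals for k, vals in selector.items())

# Selectors mirroring the test: equality selectors for A and B, and a
# set-based "in (A, B)" selector — hence two log lines per event above.
watchers = {
    "watch-A":  {"watch-this-configmap": {"multiple-watchers-A"}},
    "watch-B":  {"watch-this-configmap": {"multiple-watchers-B"}},
    "watch-AB": {"watch-this-configmap": {"multiple-watchers-A",
                                          "multiple-watchers-B"}},
}

def observers(event_labels: dict) -> list:
    """Names of the watchers whose selector matches the event's labels."""
    return [name for name, sel in watchers.items() if matches(sel, event_labels)]
```

Events on configmap A are delivered to `watch-A` and `watch-AB`; events on configmap B to `watch-B` and `watch-AB` — matching the pairs of identical log lines above.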
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:21:19.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 19:21:20.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be428d93-9910-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-qd2t7" to be "success or failure"
Jun 27 19:21:20.229: INFO: Pod "downwardapi-volume-be428d93-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.30908ms
Jun 27 19:21:22.232: INFO: Pod "downwardapi-volume-be428d93-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013789401s
Jun 27 19:21:24.235: INFO: Pod "downwardapi-volume-be428d93-9910-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017383716s
STEP: Saw pod success
Jun 27 19:21:24.235: INFO: Pod "downwardapi-volume-be428d93-9910-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:21:24.237: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-be428d93-9910-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 19:21:24.293: INFO: Waiting for pod downwardapi-volume-be428d93-9910-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:21:24.415: INFO: Pod downwardapi-volume-be428d93-9910-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:21:24.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qd2t7" for this suite.
Jun 27 19:21:30.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:21:30.519: INFO: namespace: e2e-tests-downward-api-qd2t7, resource: bindings, ignored listing per whitelist
Jun 27 19:21:30.606: INFO: namespace e2e-tests-downward-api-qd2t7 deletion completed in 6.185993455s

• [SLOW TEST:10.627 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
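The "cpu limit" case above creates a pod whose downward API volume projects the container's own `limits.cpu` into a file, then reads it back from the container logs. A sketch of that manifest shape as a plain dict (the names, image, and quantity are illustrative stand-ins, not taken from the log):

```python
# Illustrative pod spec for the downward API volume "cpu limit" test:
# a downwardAPI volume item with a resourceFieldRef pointing at limits.cpu.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},  # hypothetical name
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",  # stand-in image
            "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
            "resources": {"limits": {"cpu": "1250m"}},  # example quantity
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "cpu_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.cpu",
                        "divisor": "1m",  # project the value in millicores
                    },
                }],
            },
        }],
    },
}
```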
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:21:30.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c492efb4-9910-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 19:21:30.824: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4943c45-9910-11e9-8fa9-0242ac110005" in namespace "e2e-tests-configmap-z6wvj" to be "success or failure"
Jun 27 19:21:30.836: INFO: Pod "pod-configmaps-c4943c45-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.887557ms
Jun 27 19:21:32.865: INFO: Pod "pod-configmaps-c4943c45-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041175763s
Jun 27 19:21:34.900: INFO: Pod "pod-configmaps-c4943c45-9910-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076677836s
STEP: Saw pod success
Jun 27 19:21:34.900: INFO: Pod "pod-configmaps-c4943c45-9910-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:21:34.904: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-c4943c45-9910-11e9-8fa9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jun 27 19:21:34.933: INFO: Waiting for pod pod-configmaps-c4943c45-9910-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:21:34.949: INFO: Pod pod-configmaps-c4943c45-9910-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:21:34.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-z6wvj" for this suite.
Jun 27 19:21:41.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:21:41.072: INFO: namespace: e2e-tests-configmap-z6wvj, resource: bindings, ignored listing per whitelist
Jun 27 19:21:41.113: INFO: namespace e2e-tests-configmap-z6wvj deletion completed in 6.158728414s

• [SLOW TEST:10.507 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
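The "mappings as non-root" case mounts a configMap volume whose `items` list remaps a key to a custom relative path, with the pod running under a non-root UID. An illustrative dict of that shape (key names, UID, and image are hypothetical examples):

```python
# Illustrative spec: configMap volume "items" remap key -> path, consumed
# by a container running as a non-root user via pod securityContext.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-example"},  # hypothetical name
    "spec": {
        "securityContext": {"runAsUser": 1000},  # the "non-root" part
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",  # stand-in image
            "command": ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"],
            "volumeMounts": [{"name": "configmap-volume",
                              "mountPath": "/etc/configmap-volume"}],
        }],
        "volumes": [{
            "name": "configmap-volume",
            "configMap": {
                "name": "configmap-test-volume-map",
                # the mapping: key "data-2" surfaces at a remapped path
                "items": [{"key": "data-2", "path": "path/to/data-2"}],
            },
        }],
    },
}
```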
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:21:41.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jun 27 19:21:41.213: INFO: Waiting up to 5m0s for pod "downward-api-cac702c6-9910-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-c5xdx" to be "success or failure"
Jun 27 19:21:41.342: INFO: Pod "downward-api-cac702c6-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 129.162712ms
Jun 27 19:21:43.346: INFO: Pod "downward-api-cac702c6-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132294953s
Jun 27 19:21:45.349: INFO: Pod "downward-api-cac702c6-9910-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.13619714s
STEP: Saw pod success
Jun 27 19:21:45.350: INFO: Pod "downward-api-cac702c6-9910-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:21:45.352: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downward-api-cac702c6-9910-11e9-8fa9-0242ac110005 container dapi-container: 
STEP: delete the pod
Jun 27 19:21:45.372: INFO: Waiting for pod downward-api-cac702c6-9910-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:21:45.398: INFO: Pod downward-api-cac702c6-9910-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:21:45.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-c5xdx" for this suite.
Jun 27 19:21:51.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:21:51.457: INFO: namespace: e2e-tests-downward-api-c5xdx, resource: bindings, ignored listing per whitelist
Jun 27 19:21:51.527: INFO: namespace e2e-tests-downward-api-c5xdx deletion completed in 6.124959043s

• [SLOW TEST:10.414 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
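The "default limits.cpu/memory from node allocatable" case projects resource limits into env vars on a container that declares *no* resources, so the values fall back to the node's allocatable capacity. A sketch of that spec (names and image are illustrative):

```python
# Illustrative spec: resourceFieldRef env vars with no "resources" block on
# the container, so the projected limits default to node allocatable.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-example"},  # hypothetical name
    "spec": {
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",  # stand-in image
            "command": ["sh", "-c", "env"],
            # deliberately no "resources" field here
            "env": [
                {"name": "CPU_LIMIT",
                 "valueFrom": {"resourceFieldRef": {"resource": "limits.cpu"}}},
                {"name": "MEMORY_LIMIT",
                 "valueFrom": {"resourceFieldRef": {"resource": "limits.memory"}}},
            ],
        }],
        "restartPolicy": "Never",
    },
}
```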
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:21:51.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d0fb614f-9910-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 19:21:51.730: INFO: Waiting up to 5m0s for pod "pod-secrets-d10a8c97-9910-11e9-8fa9-0242ac110005" in namespace "e2e-tests-secrets-9b586" to be "success or failure"
Jun 27 19:21:51.739: INFO: Pod "pod-secrets-d10a8c97-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.539629ms
Jun 27 19:21:53.749: INFO: Pod "pod-secrets-d10a8c97-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019517051s
Jun 27 19:21:55.754: INFO: Pod "pod-secrets-d10a8c97-9910-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024007365s
STEP: Saw pod success
Jun 27 19:21:55.754: INFO: Pod "pod-secrets-d10a8c97-9910-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:21:55.758: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-d10a8c97-9910-11e9-8fa9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jun 27 19:21:55.825: INFO: Waiting for pod pod-secrets-d10a8c97-9910-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:21:55.839: INFO: Pod pod-secrets-d10a8c97-9910-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:21:55.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9b586" for this suite.
Jun 27 19:22:01.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:22:01.984: INFO: namespace: e2e-tests-secrets-9b586, resource: bindings, ignored listing per whitelist
Jun 27 19:22:02.097: INFO: namespace e2e-tests-secrets-9b586 deletion completed in 6.254506801s

• [SLOW TEST:10.571 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
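The secrets case above combines three knobs: a non-root `runAsUser`, a pod-level `fsGroup`, and an explicit `defaultMode` on the secret volume so the mounted files carry predictable permissions and group ownership. An illustrative dict (UIDs, mode, and image are example values, not taken from the log):

```python
# Illustrative spec: secret volume with defaultMode, read by a non-root
# container whose pod sets fsGroup so the files are accessible to it.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example"},  # hypothetical name
    "spec": {
        "securityContext": {"runAsUser": 1000, "fsGroup": 1001},
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",  # stand-in image
            "command": ["sh", "-c", "ls -l /etc/secret-volume"],
            "volumeMounts": [{"name": "secret-volume",
                              "mountPath": "/etc/secret-volume"}],
        }],
        "volumes": [{
            "name": "secret-volume",
            "secret": {
                "secretName": "secret-test",
                "defaultMode": 0o400,  # serialized as decimal 256 in JSON
            },
        }],
    },
}
```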
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:22:02.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 27 19:22:02.280: INFO: Waiting up to 5m0s for pod "pod-d754de0c-9910-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-8xp5f" to be "success or failure"
Jun 27 19:22:02.293: INFO: Pod "pod-d754de0c-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.924776ms
Jun 27 19:22:04.298: INFO: Pod "pod-d754de0c-9910-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017521772s
Jun 27 19:22:06.303: INFO: Pod "pod-d754de0c-9910-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023295435s
STEP: Saw pod success
Jun 27 19:22:06.303: INFO: Pod "pod-d754de0c-9910-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:22:06.309: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-d754de0c-9910-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 19:22:06.347: INFO: Waiting for pod pod-d754de0c-9910-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:22:06.385: INFO: Pod pod-d754de0c-9910-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:22:06.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8xp5f" for this suite.
Jun 27 19:22:12.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:22:12.531: INFO: namespace: e2e-tests-emptydir-8xp5f, resource: bindings, ignored listing per whitelist
Jun 27 19:22:12.608: INFO: namespace e2e-tests-emptydir-8xp5f deletion completed in 6.217723162s

• [SLOW TEST:10.511 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
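The "(non-root,0777,default)" triple in the test name encodes: run as a non-root user, expect 0777 file mode, on the default emptyDir medium (node disk, since `medium` is unset). A sketch of such a spec; the image and command are hypothetical stand-ins for the e2e mounttest image:

```python
# Illustrative spec for "(non-root,0777,default)": emptyDir on the default
# medium, written with mode 0777 by a non-root user.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-example"},  # hypothetical name
    "spec": {
        "securityContext": {"runAsUser": 1000},  # the "non-root" part
        "containers": [{
            "name": "test-container",
            "image": "busybox",  # stand-in image
            # create a file with 0777 permissions and print its mode
            "command": ["sh", "-c",
                        "touch /test-volume/f && chmod 0777 /test-volume/f"
                        " && stat -c %a /test-volume/f"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        # emptyDir with no "medium" key = the node's default medium
        "volumes": [{"name": "test-volume", "emptyDir": {}}],
        "restartPolicy": "Never",
    },
}
```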
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:22:12.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8nx5t
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 27 19:22:12.771: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 27 19:22:36.993: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8nx5t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 27 19:22:36.993: INFO: >>> kubeConfig: /root/.kube/config
Jun 27 19:22:38.189: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:22:38.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-8nx5t" for this suite.
Jun 27 19:23:02.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:23:02.307: INFO: namespace: e2e-tests-pod-network-test-8nx5t, resource: bindings, ignored listing per whitelist
Jun 27 19:23:02.403: INFO: namespace e2e-tests-pod-network-test-8nx5t deletion completed in 24.208904039s

• [SLOW TEST:49.794 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
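The ExecWithOptions line above shows the actual probe: from the host-network test pod, `echo 'hostName' | nc -w 1 -u 10.32.0.4 8081` sends the literal string `hostName` over UDP to the netserver pod, which answers with its hostname. The same handshake can be reproduced locally with plain sockets (loopback address and ephemeral port stand in for the pod IP 10.32.0.4:8081):

```python
import socket
import threading

# Local re-creation of the UDP hostname probe from the log: a server that
# answers a "hostName" datagram with its hostname, and a client mirroring
# the nc -w 1 -u behaviour (1-second timeout).

def udp_hostname_server(sock: socket.socket) -> None:
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(socket.gethostname().encode(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # any free port; the e2e test uses 8081
threading.Thread(target=udp_hostname_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1)  # mirrors nc -w 1
client.sendto(b"hostName", server.getsockname())
reply = client.recv(1024).decode()
```

The e2e framework then greps the reply against the expected endpoint list, which is why the log reports `Found all expected endpoints: [netserver-0]`.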
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:23:02.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0627 19:23:42.887121       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 27 19:23:42.887: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:23:42.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-qd2zq" for this suite.
Jun 27 19:24:02.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:24:03.112: INFO: namespace: e2e-tests-gc-qd2zq, resource: bindings, ignored listing per whitelist
Jun 27 19:24:03.113: INFO: namespace e2e-tests-gc-qd2zq deletion completed in 20.222066582s

• [SLOW TEST:60.710 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
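The garbage-collector case deletes the replication controller with orphaning delete options, then waits 30 seconds (the STEP above) to confirm the GC does *not* cascade to the pods. One way to express such a request body, sketched as a dict; whether a given client sends `propagationPolicy` or the older `orphanDependents` field varies by client and API version, so treat this as illustrative:

```python
# Illustrative DeleteOptions body for an orphaning delete: the GC strips the
# owner reference from the RC's pods instead of deleting them, so the pods
# survive the RC — which is exactly what the 30s wait above verifies.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Orphan",  # vs "Background"/"Foreground" cascading
}
```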
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:24:03.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 19:24:03.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f8b34be-9911-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-gw5nm" to be "success or failure"
Jun 27 19:24:03.596: INFO: Pod "downwardapi-volume-1f8b34be-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 143.096127ms
Jun 27 19:24:05.599: INFO: Pod "downwardapi-volume-1f8b34be-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14658909s
Jun 27 19:24:07.606: INFO: Pod "downwardapi-volume-1f8b34be-9911-11e9-8fa9-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 4.153795523s
Jun 27 19:24:09.611: INFO: Pod "downwardapi-volume-1f8b34be-9911-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.158246211s
STEP: Saw pod success
Jun 27 19:24:09.611: INFO: Pod "downwardapi-volume-1f8b34be-9911-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:24:09.614: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-1f8b34be-9911-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 19:24:09.644: INFO: Waiting for pod downwardapi-volume-1f8b34be-9911-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:24:09.648: INFO: Pod downwardapi-volume-1f8b34be-9911-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:24:09.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gw5nm" for this suite.
Jun 27 19:24:15.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:24:15.806: INFO: namespace: e2e-tests-downward-api-gw5nm, resource: bindings, ignored listing per whitelist
Jun 27 19:24:15.845: INFO: namespace e2e-tests-downward-api-gw5nm deletion completed in 6.193521974s

• [SLOW TEST:12.731 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
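The memory-limit variant projects `limits.memory` through a divisor, so a limit like 64Mi with divisor 1Mi is written to the volume file as the integer 64. A simplified sketch of that conversion (the real apiserver uses `resource.Quantity`, with its own rounding rules; plain integer division here is an assumption for illustration):

```python
# Simplified model of downward API quantity projection: value / divisor.
# Only binary-SI suffixes are handled; the real Quantity parser supports
# decimal suffixes, exponents, and defined rounding as well.
UNITS = {"": 1, "Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def project(quantity: str, divisor: str) -> int:
    """Integer written into the volume file for quantity projected by divisor."""
    def parse(q: str) -> int:
        for suffix, mult in UNITS.items():
            if suffix and q.endswith(suffix):
                return int(q[:-len(suffix)]) * mult
        return int(q)  # no recognized suffix: bare byte count
    return parse(quantity) // parse(divisor)
```

For example, `project("64Mi", "1Mi")` yields 64, and `project("1Gi", "1Mi")` yields 1024.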
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:24:15.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jun 27 19:24:16.225: INFO: Waiting up to 5m0s for pod "client-containers-272c1631-9911-11e9-8fa9-0242ac110005" in namespace "e2e-tests-containers-gmp9l" to be "success or failure"
Jun 27 19:24:16.254: INFO: Pod "client-containers-272c1631-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.791639ms
Jun 27 19:24:18.260: INFO: Pod "client-containers-272c1631-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035213913s
Jun 27 19:24:20.265: INFO: Pod "client-containers-272c1631-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0398795s
Jun 27 19:24:22.272: INFO: Pod "client-containers-272c1631-9911-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046753904s
STEP: Saw pod success
Jun 27 19:24:22.272: INFO: Pod "client-containers-272c1631-9911-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:24:22.276: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod client-containers-272c1631-9911-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 19:24:22.408: INFO: Waiting for pod client-containers-272c1631-9911-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:24:22.412: INFO: Pod client-containers-272c1631-9911-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:24:22.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-gmp9l" for this suite.
Jun 27 19:24:28.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:24:28.493: INFO: namespace: e2e-tests-containers-gmp9l, resource: bindings, ignored listing per whitelist
Jun 27 19:24:28.514: INFO: namespace e2e-tests-containers-gmp9l deletion completed in 6.097563527s

• [SLOW TEST:12.669 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
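[Editor's note] The test above verifies that a pod spec can override both halves of an image's default invocation. A minimal sketch of such a pod (names and values here are illustrative assumptions, not the exact test fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]            # overrides the image ENTRYPOINT
    args: ["override", "arguments"]   # overrides the image CMD
```

The test then reads the container's logs (as seen at the "Trying to get logs" line above) and checks that the echoed output matches the overridden args.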
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:24:28.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:24:32.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-mmt5h" for this suite.
Jun 27 19:25:12.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:25:12.737: INFO: namespace: e2e-tests-kubelet-test-mmt5h, resource: bindings, ignored listing per whitelist
Jun 27 19:25:12.813: INFO: namespace e2e-tests-kubelet-test-mmt5h deletion completed in 40.106787259s

• [SLOW TEST:44.299 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
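[Editor's note] The read-only-root test above exercises the `readOnlyRootFilesystem` security context field. A sketch of the kind of pod involved (image and command are assumptions based on the test name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example   # hypothetical name
spec:
  containers:
  - name: busybox-readonly
    image: busybox
    # The write to / is expected to FAIL because the root filesystem is read-only.
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
```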
SSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:25:12.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 19:25:12.977: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48f77e61-9911-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-t8cw9" to be "success or failure"
Jun 27 19:25:12.984: INFO: Pod "downwardapi-volume-48f77e61-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.782671ms
Jun 27 19:25:15.010: INFO: Pod "downwardapi-volume-48f77e61-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033019979s
Jun 27 19:25:17.014: INFO: Pod "downwardapi-volume-48f77e61-9911-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037546481s
STEP: Saw pod success
Jun 27 19:25:17.014: INFO: Pod "downwardapi-volume-48f77e61-9911-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:25:17.019: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-48f77e61-9911-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 19:25:17.072: INFO: Waiting for pod downwardapi-volume-48f77e61-9911-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:25:17.095: INFO: Pod downwardapi-volume-48f77e61-9911-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:25:17.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t8cw9" for this suite.
Jun 27 19:25:23.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:25:23.227: INFO: namespace: e2e-tests-downward-api-t8cw9, resource: bindings, ignored listing per whitelist
Jun 27 19:25:23.298: INFO: namespace e2e-tests-downward-api-t8cw9 deletion completed in 6.199668329s

• [SLOW TEST:10.484 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
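[Editor's note] The DefaultMode test above checks file permissions on a downward API volume. A sketch of the relevant volume stanza (paths and mode are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # applied to every file in the volume unless an item overrides it
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```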
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:25:23.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4f328a34-9911-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 19:25:23.436: INFO: Waiting up to 5m0s for pod "pod-secrets-4f33159c-9911-11e9-8fa9-0242ac110005" in namespace "e2e-tests-secrets-dfzg9" to be "success or failure"
Jun 27 19:25:23.452: INFO: Pod "pod-secrets-4f33159c-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.679816ms
Jun 27 19:25:25.473: INFO: Pod "pod-secrets-4f33159c-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037180304s
Jun 27 19:25:27.479: INFO: Pod "pod-secrets-4f33159c-9911-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043137816s
STEP: Saw pod success
Jun 27 19:25:27.479: INFO: Pod "pod-secrets-4f33159c-9911-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:25:27.483: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-4f33159c-9911-11e9-8fa9-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jun 27 19:25:27.603: INFO: Waiting for pod pod-secrets-4f33159c-9911-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:25:27.609: INFO: Pod pod-secrets-4f33159c-9911-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:25:27.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dfzg9" for this suite.
Jun 27 19:25:33.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:25:33.774: INFO: namespace: e2e-tests-secrets-dfzg9, resource: bindings, ignored listing per whitelist
Jun 27 19:25:33.875: INFO: namespace e2e-tests-secrets-dfzg9 deletion completed in 6.262033479s

• [SLOW TEST:10.577 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
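[Editor's note] The Secrets-as-env-vars test above uses `secretKeyRef`. A sketch of the consuming container (secret and key names are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test   # the Secret created in the "Creating secret" step
          key: data-1
```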
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:25:33.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jun 27 19:25:34.030: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:25:34.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-krtdv" for this suite.
Jun 27 19:25:40.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:25:40.198: INFO: namespace: e2e-tests-kubectl-krtdv, resource: bindings, ignored listing per whitelist
Jun 27 19:25:40.202: INFO: namespace e2e-tests-kubectl-krtdv deletion completed in 6.10542988s

• [SLOW TEST:6.327 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:25:40.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-59564dab-9911-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume secrets
Jun 27 19:25:40.438: INFO: Waiting up to 5m0s for pod "pod-secrets-595d7805-9911-11e9-8fa9-0242ac110005" in namespace "e2e-tests-secrets-n9rbd" to be "success or failure"
Jun 27 19:25:40.445: INFO: Pod "pod-secrets-595d7805-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.895051ms
Jun 27 19:25:42.448: INFO: Pod "pod-secrets-595d7805-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010070275s
Jun 27 19:25:44.451: INFO: Pod "pod-secrets-595d7805-9911-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013262921s
STEP: Saw pod success
Jun 27 19:25:44.451: INFO: Pod "pod-secrets-595d7805-9911-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:25:44.453: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-secrets-595d7805-9911-11e9-8fa9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jun 27 19:25:44.478: INFO: Waiting for pod pod-secrets-595d7805-9911-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:25:44.538: INFO: Pod pod-secrets-595d7805-9911-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:25:44.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-n9rbd" for this suite.
Jun 27 19:25:50.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:25:50.646: INFO: namespace: e2e-tests-secrets-n9rbd, resource: bindings, ignored listing per whitelist
Jun 27 19:25:50.656: INFO: namespace e2e-tests-secrets-n9rbd deletion completed in 6.115697282s
STEP: Destroying namespace "e2e-tests-secret-namespace-zrrns" for this suite.
Jun 27 19:25:56.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:25:56.717: INFO: namespace: e2e-tests-secret-namespace-zrrns, resource: bindings, ignored listing per whitelist
Jun 27 19:25:56.816: INFO: namespace e2e-tests-secret-namespace-zrrns deletion completed in 6.160274447s

• [SLOW TEST:16.613 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
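[Editor's note] The test above mounts a Secret as a volume and confirms that Secret names are namespace-scoped: a same-named Secret in a different namespace (the second destroyed namespace above) must not interfere. A sketch of the volume wiring (names are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # resolved only within the pod's own namespace
```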
S
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:25:56.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jun 27 19:25:57.050: INFO: Waiting up to 5m0s for pod "downward-api-6343b1b2-9911-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-rx8vc" to be "success or failure"
Jun 27 19:25:57.279: INFO: Pod "downward-api-6343b1b2-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 228.582516ms
Jun 27 19:25:59.290: INFO: Pod "downward-api-6343b1b2-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239387608s
Jun 27 19:26:01.294: INFO: Pod "downward-api-6343b1b2-9911-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2431783s
STEP: Saw pod success
Jun 27 19:26:01.294: INFO: Pod "downward-api-6343b1b2-9911-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:26:01.296: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downward-api-6343b1b2-9911-11e9-8fa9-0242ac110005 container dapi-container: 
STEP: delete the pod
Jun 27 19:26:01.334: INFO: Waiting for pod downward-api-6343b1b2-9911-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:26:01.593: INFO: Pod downward-api-6343b1b2-9911-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:26:01.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rx8vc" for this suite.
Jun 27 19:26:07.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:26:07.715: INFO: namespace: e2e-tests-downward-api-rx8vc, resource: bindings, ignored listing per whitelist
Jun 27 19:26:07.760: INFO: namespace e2e-tests-downward-api-rx8vc deletion completed in 6.159529042s

• [SLOW TEST:10.943 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
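[Editor's note] The pod-UID test above uses the downward API's `fieldRef`. The relevant env stanza looks like this (container name is an illustrative assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the pod's own UID, injected at creation
```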
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:26:07.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 27 19:26:07.913: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:26:09.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-bn547" for this suite.
Jun 27 19:26:15.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:26:15.177: INFO: namespace: e2e-tests-custom-resource-definition-bn547, resource: bindings, ignored listing per whitelist
Jun 27 19:26:15.205: INFO: namespace e2e-tests-custom-resource-definition-bn547 deletion completed in 6.111467242s

• [SLOW TEST:7.445 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
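[Editor's note] The CRD test above creates and deletes a definition via the apiextensions API. A minimal CRD of the era this cluster runs (v1.13, so `apiextensions.k8s.io/v1beta1`); group and kind names here are illustrative assumptions:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # v1.13-era API; replaced by v1 in later releases
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```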
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:26:15.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-w4bc4
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jun 27 19:26:15.387: INFO: Found 0 stateful pods, waiting for 3
Jun 27 19:26:25.391: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 19:26:25.391: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 19:26:25.391: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jun 27 19:26:35.392: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 19:26:35.392: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 19:26:35.392: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jun 27 19:26:35.416: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jun 27 19:26:45.468: INFO: Updating stateful set ss2
Jun 27 19:26:45.475: INFO: Waiting for Pod e2e-tests-statefulset-w4bc4/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666
Jun 27 19:26:55.486: INFO: Waiting for Pod e2e-tests-statefulset-w4bc4/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666
STEP: Restoring Pods to the correct revision when they are deleted
Jun 27 19:27:05.722: INFO: Found 2 stateful pods, waiting for 3
Jun 27 19:27:15.729: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 19:27:15.729: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 27 19:27:15.729: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jun 27 19:27:15.752: INFO: Updating stateful set ss2
Jun 27 19:27:15.884: INFO: Waiting for Pod e2e-tests-statefulset-w4bc4/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666
Jun 27 19:27:26.035: INFO: Updating stateful set ss2
Jun 27 19:27:26.070: INFO: Waiting for StatefulSet e2e-tests-statefulset-w4bc4/ss2 to complete update
Jun 27 19:27:26.070: INFO: Waiting for Pod e2e-tests-statefulset-w4bc4/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666
Jun 27 19:27:36.116: INFO: Waiting for StatefulSet e2e-tests-statefulset-w4bc4/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jun 27 19:27:46.076: INFO: Deleting all statefulset in ns e2e-tests-statefulset-w4bc4
Jun 27 19:27:46.079: INFO: Scaling statefulset ss2 to 0
Jun 27 19:28:06.097: INFO: Waiting for statefulset status.replicas updated to 0
Jun 27 19:28:06.100: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:28:06.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-w4bc4" for this suite.
Jun 27 19:28:14.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:28:14.201: INFO: namespace: e2e-tests-statefulset-w4bc4, resource: bindings, ignored listing per whitelist
Jun 27 19:28:14.203: INFO: namespace e2e-tests-statefulset-w4bc4 deletion completed in 8.088941478s

• [SLOW TEST:118.998 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
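[Editor's note] The canary/phased behavior logged above is driven by the StatefulSet `partition` field: pods with an ordinal >= partition get the new revision, pods below it keep the old one. Setting partition above the replica count (3) blocks the update entirely; lowering it to 2 canaries only ss2-2; stepping it down to 1 and then 0 phases the rollout, exactly as the "Waiting for Pod ... to have revision" lines show. A sketch of the stanza (an assumption based on the logged behavior, not copied from the test):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only ordinals >= 2 (ss2-2) adopt the new template revision
```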
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:28:14.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jun 27 19:28:14.381: INFO: Waiting up to 5m0s for pod "downward-api-b51e4e76-9911-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-t46rq" to be "success or failure"
Jun 27 19:28:14.406: INFO: Pod "downward-api-b51e4e76-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.605747ms
Jun 27 19:28:16.408: INFO: Pod "downward-api-b51e4e76-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027375166s
Jun 27 19:28:18.518: INFO: Pod "downward-api-b51e4e76-9911-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136633793s
STEP: Saw pod success
Jun 27 19:28:18.518: INFO: Pod "downward-api-b51e4e76-9911-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:28:18.522: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downward-api-b51e4e76-9911-11e9-8fa9-0242ac110005 container dapi-container: 
STEP: delete the pod
Jun 27 19:28:18.603: INFO: Waiting for pod downward-api-b51e4e76-9911-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:28:18.805: INFO: Pod downward-api-b51e4e76-9911-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:28:18.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t46rq" for this suite.
Jun 27 19:28:25.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:28:25.092: INFO: namespace: e2e-tests-downward-api-t46rq, resource: bindings, ignored listing per whitelist
Jun 27 19:28:25.105: INFO: namespace e2e-tests-downward-api-t46rq deletion completed in 6.296236381s

• [SLOW TEST:10.901 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
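[Editor's note] The limits/requests test above uses `resourceFieldRef`, the downward API's resource counterpart to `fieldRef`. A sketch of the env wiring (container name and resource choices are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-resources-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
```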
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:28:25.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 27 19:28:25.337: INFO: Waiting up to 5m0s for pod "pod-bba63575-9911-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-ddlzm" to be "success or failure"
Jun 27 19:28:25.401: INFO: Pod "pod-bba63575-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 63.527164ms
Jun 27 19:28:27.410: INFO: Pod "pod-bba63575-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072740526s
Jun 27 19:28:29.415: INFO: Pod "pod-bba63575-9911-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077494783s
STEP: Saw pod success
Jun 27 19:28:29.415: INFO: Pod "pod-bba63575-9911-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:28:29.418: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-bba63575-9911-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 19:28:29.499: INFO: Waiting for pod pod-bba63575-9911-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:28:29.508: INFO: Pod pod-bba63575-9911-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:28:29.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ddlzm" for this suite.
Jun 27 19:28:35.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:28:35.565: INFO: namespace: e2e-tests-emptydir-ddlzm, resource: bindings, ignored listing per whitelist
Jun 27 19:28:35.618: INFO: namespace e2e-tests-emptydir-ddlzm deletion completed in 6.099594367s

• [SLOW TEST:10.513 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
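[Editor's note] The "(root,0644,tmpfs)" label above decodes as: run as root, create a file with mode 0644, on a memory-backed emptyDir. A sketch of the volume (command is an illustrative assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo content > /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # backed by tmpfs rather than node disk
```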
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:28:35.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-c1d8d9fd-9911-11e9-8fa9-0242ac110005
Jun 27 19:28:35.818: INFO: Pod name my-hostname-basic-c1d8d9fd-9911-11e9-8fa9-0242ac110005: Found 0 pods out of 1
Jun 27 19:28:40.824: INFO: Pod name my-hostname-basic-c1d8d9fd-9911-11e9-8fa9-0242ac110005: Found 1 pods out of 1
Jun 27 19:28:40.824: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c1d8d9fd-9911-11e9-8fa9-0242ac110005" are running
Jun 27 19:28:40.828: INFO: Pod "my-hostname-basic-c1d8d9fd-9911-11e9-8fa9-0242ac110005-dxqbw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-27 19:28:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-27 19:28:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-27 19:28:38 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-27 19:28:35 +0000 UTC Reason: Message:}])
Jun 27 19:28:40.828: INFO: Trying to dial the pod
Jun 27 19:28:45.842: INFO: Controller my-hostname-basic-c1d8d9fd-9911-11e9-8fa9-0242ac110005: Got expected result from replica 1 [my-hostname-basic-c1d8d9fd-9911-11e9-8fa9-0242ac110005-dxqbw]: "my-hostname-basic-c1d8d9fd-9911-11e9-8fa9-0242ac110005-dxqbw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:28:45.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-z48lt" for this suite.
Jun 27 19:28:51.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:28:51.925: INFO: namespace: e2e-tests-replication-controller-z48lt, resource: bindings, ignored listing per whitelist
Jun 27 19:28:51.934: INFO: namespace e2e-tests-replication-controller-z48lt deletion completed in 6.087762106s

• [SLOW TEST:16.316 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:28:51.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 27 19:28:52.072: INFO: Waiting up to 5m0s for pod "pod-cb96e475-9911-11e9-8fa9-0242ac110005" in namespace "e2e-tests-emptydir-mprql" to be "success or failure"
Jun 27 19:28:52.080: INFO: Pod "pod-cb96e475-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133296ms
Jun 27 19:28:54.114: INFO: Pod "pod-cb96e475-9911-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04246409s
Jun 27 19:28:56.120: INFO: Pod "pod-cb96e475-9911-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04766852s
STEP: Saw pod success
Jun 27 19:28:56.120: INFO: Pod "pod-cb96e475-9911-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:28:56.125: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-cb96e475-9911-11e9-8fa9-0242ac110005 container test-container: 
STEP: delete the pod
Jun 27 19:28:56.165: INFO: Waiting for pod pod-cb96e475-9911-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:28:56.175: INFO: Pod pod-cb96e475-9911-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:28:56.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mprql" for this suite.
Jun 27 19:29:02.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:29:02.533: INFO: namespace: e2e-tests-emptydir-mprql, resource: bindings, ignored listing per whitelist
Jun 27 19:29:02.571: INFO: namespace e2e-tests-emptydir-mprql deletion completed in 6.390516636s

• [SLOW TEST:10.638 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:29:02.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 27 19:29:10.895: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 27 19:29:10.901: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 27 19:29:12.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 27 19:29:12.908: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 27 19:29:14.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 27 19:29:14.905: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 27 19:29:16.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 27 19:29:16.904: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 27 19:29:18.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 27 19:29:18.909: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 27 19:29:20.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 27 19:29:20.905: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 27 19:29:22.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 27 19:29:22.905: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 27 19:29:24.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 27 19:29:24.907: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 27 19:29:26.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 27 19:29:26.909: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 27 19:29:28.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 27 19:29:28.911: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:29:28.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dbd7q" for this suite.
Jun 27 19:29:50.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:29:50.984: INFO: namespace: e2e-tests-container-lifecycle-hook-dbd7q, resource: bindings, ignored listing per whitelist
Jun 27 19:29:51.059: INFO: namespace e2e-tests-container-lifecycle-hook-dbd7q deletion completed in 22.124409975s

• [SLOW TEST:48.488 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:29:51.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0627 19:30:01.218168       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 27 19:30:01.218: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:30:01.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-88hnn" for this suite.
Jun 27 19:30:07.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:30:07.332: INFO: namespace: e2e-tests-gc-88hnn, resource: bindings, ignored listing per whitelist
Jun 27 19:30:07.346: INFO: namespace e2e-tests-gc-88hnn deletion completed in 6.124457332s

• [SLOW TEST:16.286 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:30:07.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-j28lv
I0627 19:30:07.609924       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-j28lv, replica count: 1
I0627 19:30:08.660395       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0627 19:30:09.660646       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0627 19:30:10.660870       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0627 19:30:11.661119       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun 27 19:30:11.789: INFO: Created: latency-svc-66k97
Jun 27 19:30:11.818: INFO: Got endpoints: latency-svc-66k97 [57.287871ms]
Jun 27 19:30:11.920: INFO: Created: latency-svc-vlm77
Jun 27 19:30:11.940: INFO: Got endpoints: latency-svc-vlm77 [120.795142ms]
Jun 27 19:30:12.003: INFO: Created: latency-svc-7xhj4
Jun 27 19:30:12.013: INFO: Got endpoints: latency-svc-7xhj4 [194.227709ms]
Jun 27 19:30:12.159: INFO: Created: latency-svc-n9pw2
Jun 27 19:30:12.163: INFO: Got endpoints: latency-svc-n9pw2 [344.39442ms]
Jun 27 19:30:12.239: INFO: Created: latency-svc-x5bcv
Jun 27 19:30:12.246: INFO: Got endpoints: latency-svc-x5bcv [427.780918ms]
Jun 27 19:30:12.501: INFO: Created: latency-svc-nkdg8
Jun 27 19:30:12.526: INFO: Got endpoints: latency-svc-nkdg8 [707.230581ms]
Jun 27 19:30:12.768: INFO: Created: latency-svc-bg5xj
Jun 27 19:30:12.775: INFO: Got endpoints: latency-svc-bg5xj [955.389798ms]
Jun 27 19:30:12.858: INFO: Created: latency-svc-94n9z
Jun 27 19:30:13.113: INFO: Got endpoints: latency-svc-94n9z [1.29367734s]
Jun 27 19:30:13.125: INFO: Created: latency-svc-52lwh
Jun 27 19:30:13.160: INFO: Got endpoints: latency-svc-52lwh [1.341077876s]
Jun 27 19:30:13.346: INFO: Created: latency-svc-q2dwn
Jun 27 19:30:13.351: INFO: Got endpoints: latency-svc-q2dwn [1.532792838s]
Jun 27 19:30:13.433: INFO: Created: latency-svc-dwz5b
Jun 27 19:30:13.436: INFO: Got endpoints: latency-svc-dwz5b [1.617684358s]
Jun 27 19:30:13.555: INFO: Created: latency-svc-wmlgl
Jun 27 19:30:13.574: INFO: Got endpoints: latency-svc-wmlgl [1.755132491s]
Jun 27 19:30:13.641: INFO: Created: latency-svc-gt495
Jun 27 19:30:13.808: INFO: Got endpoints: latency-svc-gt495 [1.989132285s]
Jun 27 19:30:13.823: INFO: Created: latency-svc-vd4dq
Jun 27 19:30:13.845: INFO: Got endpoints: latency-svc-vd4dq [2.026568247s]
Jun 27 19:30:13.910: INFO: Created: latency-svc-8gcr6
Jun 27 19:30:14.046: INFO: Got endpoints: latency-svc-8gcr6 [2.22676319s]
Jun 27 19:30:14.233: INFO: Created: latency-svc-kxdxb
Jun 27 19:30:14.238: INFO: Got endpoints: latency-svc-kxdxb [2.418611833s]
Jun 27 19:30:14.327: INFO: Created: latency-svc-6j96n
Jun 27 19:30:14.539: INFO: Got endpoints: latency-svc-6j96n [2.599452596s]
Jun 27 19:30:14.547: INFO: Created: latency-svc-9fb2g
Jun 27 19:30:14.555: INFO: Got endpoints: latency-svc-9fb2g [2.542425375s]
Jun 27 19:30:14.794: INFO: Created: latency-svc-nvb9k
Jun 27 19:30:14.794: INFO: Got endpoints: latency-svc-nvb9k [254.953314ms]
Jun 27 19:30:14.887: INFO: Created: latency-svc-bdzvj
Jun 27 19:30:14.982: INFO: Got endpoints: latency-svc-bdzvj [2.818991806s]
Jun 27 19:30:15.061: INFO: Created: latency-svc-96x49
Jun 27 19:30:15.189: INFO: Got endpoints: latency-svc-96x49 [2.942844399s]
Jun 27 19:30:15.286: INFO: Created: latency-svc-7rmnd
Jun 27 19:30:15.402: INFO: Got endpoints: latency-svc-7rmnd [2.875598019s]
Jun 27 19:30:15.424: INFO: Created: latency-svc-cn75k
Jun 27 19:30:15.428: INFO: Got endpoints: latency-svc-cn75k [2.653772221s]
Jun 27 19:30:15.498: INFO: Created: latency-svc-ktxcl
Jun 27 19:30:15.629: INFO: Got endpoints: latency-svc-ktxcl [2.516316847s]
Jun 27 19:30:15.641: INFO: Created: latency-svc-czdrb
Jun 27 19:30:15.645: INFO: Got endpoints: latency-svc-czdrb [2.484992134s]
Jun 27 19:30:15.711: INFO: Created: latency-svc-78858
Jun 27 19:30:15.725: INFO: Got endpoints: latency-svc-78858 [2.373815687s]
Jun 27 19:30:15.986: INFO: Created: latency-svc-z9r4v
Jun 27 19:30:15.990: INFO: Got endpoints: latency-svc-z9r4v [2.553437781s]
Jun 27 19:30:16.076: INFO: Created: latency-svc-g6n77
Jun 27 19:30:16.275: INFO: Got endpoints: latency-svc-g6n77 [2.700846038s]
Jun 27 19:30:16.296: INFO: Created: latency-svc-qz9fj
Jun 27 19:30:16.324: INFO: Got endpoints: latency-svc-qz9fj [2.516424627s]
Jun 27 19:30:16.502: INFO: Created: latency-svc-2hl25
Jun 27 19:30:16.509: INFO: Got endpoints: latency-svc-2hl25 [2.664125365s]
Jun 27 19:30:16.585: INFO: Created: latency-svc-2dql4
Jun 27 19:30:16.588: INFO: Got endpoints: latency-svc-2dql4 [2.542255287s]
Jun 27 19:30:16.753: INFO: Created: latency-svc-k8dcn
Jun 27 19:30:16.764: INFO: Got endpoints: latency-svc-k8dcn [2.526302288s]
Jun 27 19:30:16.989: INFO: Created: latency-svc-wpsrp
Jun 27 19:30:16.991: INFO: Got endpoints: latency-svc-wpsrp [2.435792208s]
Jun 27 19:30:17.072: INFO: Created: latency-svc-l75sz
Jun 27 19:30:17.080: INFO: Got endpoints: latency-svc-l75sz [2.286229901s]
Jun 27 19:30:17.246: INFO: Created: latency-svc-6cpmg
Jun 27 19:30:17.248: INFO: Got endpoints: latency-svc-6cpmg [2.265055681s]
Jun 27 19:30:17.310: INFO: Created: latency-svc-dhx7b
Jun 27 19:30:17.318: INFO: Got endpoints: latency-svc-dhx7b [2.128563994s]
Jun 27 19:30:17.494: INFO: Created: latency-svc-7t4pf
Jun 27 19:30:17.496: INFO: Got endpoints: latency-svc-7t4pf [2.094225829s]
Jun 27 19:30:17.585: INFO: Created: latency-svc-q947c
Jun 27 19:30:17.585: INFO: Got endpoints: latency-svc-q947c [2.156255646s]
Jun 27 19:30:17.756: INFO: Created: latency-svc-h4mmd
Jun 27 19:30:17.763: INFO: Got endpoints: latency-svc-h4mmd [2.133575666s]
Jun 27 19:30:17.835: INFO: Created: latency-svc-q7hhf
Jun 27 19:30:17.840: INFO: Got endpoints: latency-svc-q7hhf [2.194333259s]
Jun 27 19:30:17.994: INFO: Created: latency-svc-7ln9h
Jun 27 19:30:18.004: INFO: Got endpoints: latency-svc-7ln9h [2.278959693s]
Jun 27 19:30:18.071: INFO: Created: latency-svc-d8mbq
Jun 27 19:30:18.077: INFO: Got endpoints: latency-svc-d8mbq [2.086935652s]
Jun 27 19:30:18.222: INFO: Created: latency-svc-kmrsj
Jun 27 19:30:18.257: INFO: Got endpoints: latency-svc-kmrsj [1.981898255s]
Jun 27 19:30:18.412: INFO: Created: latency-svc-znrcv
Jun 27 19:30:18.417: INFO: Got endpoints: latency-svc-znrcv [2.092341463s]
Jun 27 19:30:18.485: INFO: Created: latency-svc-5cmbl
Jun 27 19:30:18.488: INFO: Got endpoints: latency-svc-5cmbl [1.978005831s]
Jun 27 19:30:18.638: INFO: Created: latency-svc-qwmc8
Jun 27 19:30:18.643: INFO: Got endpoints: latency-svc-qwmc8 [2.054804038s]
Jun 27 19:30:18.723: INFO: Created: latency-svc-hqzwf
Jun 27 19:30:18.874: INFO: Got endpoints: latency-svc-hqzwf [2.109420204s]
Jun 27 19:30:18.892: INFO: Created: latency-svc-m46hb
Jun 27 19:30:18.914: INFO: Got endpoints: latency-svc-m46hb [1.923084958s]
Jun 27 19:30:19.061: INFO: Created: latency-svc-p8pqj
Jun 27 19:30:19.075: INFO: Got endpoints: latency-svc-p8pqj [1.994015891s]
Jun 27 19:30:19.156: INFO: Created: latency-svc-msjdt
Jun 27 19:30:19.275: INFO: Got endpoints: latency-svc-msjdt [2.027251558s]
Jun 27 19:30:19.288: INFO: Created: latency-svc-c7l92
Jun 27 19:30:19.293: INFO: Got endpoints: latency-svc-c7l92 [1.975531943s]
Jun 27 19:30:19.377: INFO: Created: latency-svc-mlplh
Jun 27 19:30:19.536: INFO: Got endpoints: latency-svc-mlplh [2.040054574s]
Jun 27 19:30:19.557: INFO: Created: latency-svc-9j2mj
Jun 27 19:30:19.575: INFO: Got endpoints: latency-svc-9j2mj [1.989821553s]
Jun 27 19:30:19.741: INFO: Created: latency-svc-7bf4z
Jun 27 19:30:19.743: INFO: Got endpoints: latency-svc-7bf4z [1.979759894s]
Jun 27 19:30:19.803: INFO: Created: latency-svc-np25z
Jun 27 19:30:19.806: INFO: Got endpoints: latency-svc-np25z [1.96663481s]
Jun 27 19:30:19.975: INFO: Created: latency-svc-fmgql
Jun 27 19:30:19.980: INFO: Got endpoints: latency-svc-fmgql [1.975808831s]
Jun 27 19:30:20.055: INFO: Created: latency-svc-4w7ww
Jun 27 19:30:20.070: INFO: Got endpoints: latency-svc-4w7ww [1.993318577s]
Jun 27 19:30:20.212: INFO: Created: latency-svc-g58n5
Jun 27 19:30:20.215: INFO: Got endpoints: latency-svc-g58n5 [1.95848847s]
Jun 27 19:30:20.408: INFO: Created: latency-svc-5z25n
Jun 27 19:30:20.408: INFO: Got endpoints: latency-svc-5z25n [1.991358937s]
Jun 27 19:30:20.498: INFO: Created: latency-svc-48dpn
Jun 27 19:30:20.575: INFO: Got endpoints: latency-svc-48dpn [2.087377504s]
Jun 27 19:30:20.634: INFO: Created: latency-svc-v7jtx
Jun 27 19:30:20.636: INFO: Got endpoints: latency-svc-v7jtx [1.993304164s]
Jun 27 19:30:20.813: INFO: Created: latency-svc-lvq2d
Jun 27 19:30:20.824: INFO: Got endpoints: latency-svc-lvq2d [1.949929985s]
Jun 27 19:30:21.076: INFO: Created: latency-svc-bqhg2
Jun 27 19:30:21.087: INFO: Got endpoints: latency-svc-bqhg2 [2.173291713s]
Jun 27 19:30:21.391: INFO: Created: latency-svc-frqvd
Jun 27 19:30:21.392: INFO: Got endpoints: latency-svc-frqvd [2.31745273s]
Jun 27 19:30:21.662: INFO: Created: latency-svc-fbvfl
Jun 27 19:30:21.670: INFO: Got endpoints: latency-svc-fbvfl [2.395017315s]
Jun 27 19:30:21.933: INFO: Created: latency-svc-mvv6p
Jun 27 19:30:21.947: INFO: Got endpoints: latency-svc-mvv6p [2.65360229s]
Jun 27 19:30:22.205: INFO: Created: latency-svc-hjzzh
Jun 27 19:30:22.226: INFO: Got endpoints: latency-svc-hjzzh [2.689193661s]
Jun 27 19:30:22.439: INFO: Created: latency-svc-8hsrz
Jun 27 19:30:22.439: INFO: Got endpoints: latency-svc-8hsrz [2.864630003s]
Jun 27 19:30:22.706: INFO: Created: latency-svc-w8wwk
Jun 27 19:30:22.706: INFO: Got endpoints: latency-svc-w8wwk [2.962963568s]
Jun 27 19:30:22.872: INFO: Created: latency-svc-mk8dv
Jun 27 19:30:22.874: INFO: Got endpoints: latency-svc-mk8dv [3.067301784s]
Jun 27 19:30:23.159: INFO: Created: latency-svc-mlvjf
Jun 27 19:30:23.162: INFO: Got endpoints: latency-svc-mlvjf [3.181944705s]
Jun 27 19:30:23.422: INFO: Created: latency-svc-l9zhw
Jun 27 19:30:23.422: INFO: Got endpoints: latency-svc-l9zhw [3.351660843s]
Jun 27 19:30:23.676: INFO: Created: latency-svc-szlkp
Jun 27 19:30:23.677: INFO: Got endpoints: latency-svc-szlkp [3.461516817s]
Jun 27 19:30:23.893: INFO: Created: latency-svc-zxg29
Jun 27 19:30:23.905: INFO: Got endpoints: latency-svc-zxg29 [3.496511305s]
Jun 27 19:30:24.161: INFO: Created: latency-svc-sj8tg
Jun 27 19:30:24.163: INFO: Got endpoints: latency-svc-sj8tg [3.587582093s]
Jun 27 19:30:24.592: INFO: Created: latency-svc-6dhjd
Jun 27 19:30:24.647: INFO: Got endpoints: latency-svc-6dhjd [4.011220485s]
Jun 27 19:30:24.884: INFO: Created: latency-svc-sxg6t
Jun 27 19:30:24.933: INFO: Got endpoints: latency-svc-sxg6t [4.109592074s]
Jun 27 19:30:25.352: INFO: Created: latency-svc-btvq4
Jun 27 19:30:25.355: INFO: Got endpoints: latency-svc-btvq4 [4.267249558s]
Jun 27 19:30:26.060: INFO: Created: latency-svc-7j44w
Jun 27 19:30:26.263: INFO: Got endpoints: latency-svc-7j44w [4.87114085s]
Jun 27 19:30:26.288: INFO: Created: latency-svc-5cgr4
Jun 27 19:30:26.310: INFO: Got endpoints: latency-svc-5cgr4 [4.640247693s]
Jun 27 19:30:26.504: INFO: Created: latency-svc-pv7sl
Jun 27 19:30:26.507: INFO: Got endpoints: latency-svc-pv7sl [4.559696431s]
Jun 27 19:30:26.786: INFO: Created: latency-svc-s7kv4
Jun 27 19:30:26.804: INFO: Got endpoints: latency-svc-s7kv4 [4.578790281s]
Jun 27 19:30:27.047: INFO: Created: latency-svc-292w2
Jun 27 19:30:27.053: INFO: Got endpoints: latency-svc-292w2 [4.614159058s]
Jun 27 19:30:27.054: INFO: Created: latency-svc-xrmw8
Jun 27 19:30:27.062: INFO: Got endpoints: latency-svc-xrmw8 [4.356605196s]
Jun 27 19:30:27.128: INFO: Created: latency-svc-mkdhf
Jun 27 19:30:27.262: INFO: Got endpoints: latency-svc-mkdhf [4.38786885s]
Jun 27 19:30:27.271: INFO: Created: latency-svc-v27cd
Jun 27 19:30:27.277: INFO: Got endpoints: latency-svc-v27cd [4.114719446s]
Jun 27 19:30:27.332: INFO: Created: latency-svc-pnqp5
Jun 27 19:30:27.520: INFO: Got endpoints: latency-svc-pnqp5 [4.098444861s]
Jun 27 19:30:27.526: INFO: Created: latency-svc-9wj8k
Jun 27 19:30:27.530: INFO: Got endpoints: latency-svc-9wj8k [3.853728851s]
Jun 27 19:30:27.603: INFO: Created: latency-svc-lgw2m
Jun 27 19:30:27.779: INFO: Got endpoints: latency-svc-lgw2m [3.874344088s]
Jun 27 19:30:27.787: INFO: Created: latency-svc-gflhx
Jun 27 19:30:27.795: INFO: Got endpoints: latency-svc-gflhx [3.632146492s]
Jun 27 19:30:27.855: INFO: Created: latency-svc-4npb4
Jun 27 19:30:27.865: INFO: Got endpoints: latency-svc-4npb4 [3.217870436s]
Jun 27 19:30:27.993: INFO: Created: latency-svc-sn87j
Jun 27 19:30:27.995: INFO: Got endpoints: latency-svc-sn87j [3.06136803s]
Jun 27 19:30:28.068: INFO: Created: latency-svc-twfm2
Jun 27 19:30:28.074: INFO: Got endpoints: latency-svc-twfm2 [2.719224706s]
Jun 27 19:30:28.219: INFO: Created: latency-svc-snq2b
Jun 27 19:30:28.222: INFO: Got endpoints: latency-svc-snq2b [1.958617019s]
Jun 27 19:30:28.315: INFO: Created: latency-svc-6hfh4
Jun 27 19:30:28.388: INFO: Got endpoints: latency-svc-6hfh4 [2.077465037s]
Jun 27 19:30:28.493: INFO: Created: latency-svc-x8lpv
Jun 27 19:30:28.596: INFO: Got endpoints: latency-svc-x8lpv [2.088966751s]
Jun 27 19:30:28.607: INFO: Created: latency-svc-pd46s
Jun 27 19:30:28.612: INFO: Got endpoints: latency-svc-pd46s [1.807108423s]
Jun 27 19:30:28.671: INFO: Created: latency-svc-w6qqn
Jun 27 19:30:28.674: INFO: Got endpoints: latency-svc-w6qqn [1.620528247s]
Jun 27 19:30:28.843: INFO: Created: latency-svc-svt7l
Jun 27 19:30:28.854: INFO: Got endpoints: latency-svc-svt7l [1.79197308s]
Jun 27 19:30:28.939: INFO: Created: latency-svc-fpjsd
Jun 27 19:30:29.073: INFO: Got endpoints: latency-svc-fpjsd [1.810888475s]
Jun 27 19:30:29.085: INFO: Created: latency-svc-cflwr
Jun 27 19:30:29.088: INFO: Got endpoints: latency-svc-cflwr [1.811668413s]
Jun 27 19:30:29.159: INFO: Created: latency-svc-244lw
Jun 27 19:30:29.284: INFO: Got endpoints: latency-svc-244lw [1.763852893s]
Jun 27 19:30:29.292: INFO: Created: latency-svc-7hrxl
Jun 27 19:30:29.297: INFO: Got endpoints: latency-svc-7hrxl [1.766729975s]
Jun 27 19:30:29.351: INFO: Created: latency-svc-hpmjj
Jun 27 19:30:29.358: INFO: Got endpoints: latency-svc-hpmjj [1.578433236s]
Jun 27 19:30:29.523: INFO: Created: latency-svc-tcmzg
Jun 27 19:30:29.528: INFO: Got endpoints: latency-svc-tcmzg [1.7334762s]
Jun 27 19:30:29.609: INFO: Created: latency-svc-l7f9k
Jun 27 19:30:29.779: INFO: Got endpoints: latency-svc-l7f9k [1.913978314s]
Jun 27 19:30:29.784: INFO: Created: latency-svc-vbwhc
Jun 27 19:30:29.797: INFO: Got endpoints: latency-svc-vbwhc [1.801874887s]
Jun 27 19:30:29.989: INFO: Created: latency-svc-xqw9r
Jun 27 19:30:30.055: INFO: Got endpoints: latency-svc-xqw9r [1.981048629s]
Jun 27 19:30:30.066: INFO: Created: latency-svc-525gw
Jun 27 19:30:30.068: INFO: Got endpoints: latency-svc-525gw [1.845790831s]
Jun 27 19:30:30.247: INFO: Created: latency-svc-f2btn
Jun 27 19:30:30.400: INFO: Got endpoints: latency-svc-f2btn [2.011888717s]
Jun 27 19:30:30.410: INFO: Created: latency-svc-97bmw
Jun 27 19:30:30.422: INFO: Got endpoints: latency-svc-97bmw [1.826476342s]
Jun 27 19:30:30.489: INFO: Created: latency-svc-cq5r5
Jun 27 19:30:30.495: INFO: Got endpoints: latency-svc-cq5r5 [1.88343898s]
Jun 27 19:30:30.632: INFO: Created: latency-svc-w9z6k
Jun 27 19:30:30.643: INFO: Got endpoints: latency-svc-w9z6k [1.968540519s]
Jun 27 19:30:30.794: INFO: Created: latency-svc-rs7hp
Jun 27 19:30:30.798: INFO: Got endpoints: latency-svc-rs7hp [1.943926462s]
Jun 27 19:30:30.870: INFO: Created: latency-svc-df74w
Jun 27 19:30:30.872: INFO: Got endpoints: latency-svc-df74w [1.799303554s]
Jun 27 19:30:31.012: INFO: Created: latency-svc-77kbb
Jun 27 19:30:31.027: INFO: Got endpoints: latency-svc-77kbb [1.938908879s]
Jun 27 19:30:31.077: INFO: Created: latency-svc-kmtbl
Jun 27 19:30:31.080: INFO: Got endpoints: latency-svc-kmtbl [1.795349056s]
Jun 27 19:30:31.233: INFO: Created: latency-svc-9zgwv
Jun 27 19:30:31.237: INFO: Got endpoints: latency-svc-9zgwv [1.939510292s]
Jun 27 19:30:31.309: INFO: Created: latency-svc-hl6dr
Jun 27 19:30:31.324: INFO: Got endpoints: latency-svc-hl6dr [1.965813027s]
Jun 27 19:30:31.437: INFO: Created: latency-svc-r7jgt
Jun 27 19:30:31.446: INFO: Got endpoints: latency-svc-r7jgt [1.917430752s]
Jun 27 19:30:31.503: INFO: Created: latency-svc-vjwjj
Jun 27 19:30:31.508: INFO: Got endpoints: latency-svc-vjwjj [1.728223461s]
Jun 27 19:30:31.636: INFO: Created: latency-svc-fr85x
Jun 27 19:30:31.640: INFO: Got endpoints: latency-svc-fr85x [1.843480978s]
Jun 27 19:30:31.716: INFO: Created: latency-svc-gw56s
Jun 27 19:30:31.716: INFO: Got endpoints: latency-svc-gw56s [1.660364772s]
Jun 27 19:30:31.875: INFO: Created: latency-svc-j5nkc
Jun 27 19:30:31.878: INFO: Got endpoints: latency-svc-j5nkc [1.809897479s]
Jun 27 19:30:31.962: INFO: Created: latency-svc-r66gd
Jun 27 19:30:32.087: INFO: Got endpoints: latency-svc-r66gd [1.687087617s]
Jun 27 19:30:32.097: INFO: Created: latency-svc-xb857
Jun 27 19:30:32.102: INFO: Got endpoints: latency-svc-xb857 [1.679334654s]
Jun 27 19:30:32.157: INFO: Created: latency-svc-b4qsx
Jun 27 19:30:32.163: INFO: Got endpoints: latency-svc-b4qsx [1.66823368s]
Jun 27 19:30:32.343: INFO: Created: latency-svc-w2dx6
Jun 27 19:30:32.350: INFO: Got endpoints: latency-svc-w2dx6 [1.707785927s]
Jun 27 19:30:32.407: INFO: Created: latency-svc-vt8rk
Jun 27 19:30:32.419: INFO: Got endpoints: latency-svc-vt8rk [1.620822728s]
Jun 27 19:30:32.537: INFO: Created: latency-svc-kjxsn
Jun 27 19:30:32.552: INFO: Got endpoints: latency-svc-kjxsn [1.679983148s]
Jun 27 19:30:32.622: INFO: Created: latency-svc-8ws5w
Jun 27 19:30:32.629: INFO: Got endpoints: latency-svc-8ws5w [1.601329816s]
Jun 27 19:30:32.775: INFO: Created: latency-svc-mn897
Jun 27 19:30:32.781: INFO: Got endpoints: latency-svc-mn897 [1.701016865s]
Jun 27 19:30:32.848: INFO: Created: latency-svc-dwsvs
Jun 27 19:30:32.850: INFO: Got endpoints: latency-svc-dwsvs [1.612786648s]
Jun 27 19:30:33.053: INFO: Created: latency-svc-xpdln
Jun 27 19:30:33.059: INFO: Got endpoints: latency-svc-xpdln [1.735856482s]
Jun 27 19:30:33.126: INFO: Created: latency-svc-xrtfj
Jun 27 19:30:33.133: INFO: Got endpoints: latency-svc-xrtfj [1.686625681s]
Jun 27 19:30:33.344: INFO: Created: latency-svc-68h4k
Jun 27 19:30:33.344: INFO: Got endpoints: latency-svc-68h4k [1.836091624s]
Jun 27 19:30:33.410: INFO: Created: latency-svc-lg4jj
Jun 27 19:30:33.417: INFO: Got endpoints: latency-svc-lg4jj [1.776454473s]
Jun 27 19:30:33.555: INFO: Created: latency-svc-7wgxz
Jun 27 19:30:33.567: INFO: Got endpoints: latency-svc-7wgxz [1.851773367s]
Jun 27 19:30:33.640: INFO: Created: latency-svc-jfx75
Jun 27 19:30:33.645: INFO: Got endpoints: latency-svc-jfx75 [1.767450036s]
Jun 27 19:30:33.770: INFO: Created: latency-svc-kbwjw
Jun 27 19:30:33.778: INFO: Got endpoints: latency-svc-kbwjw [1.691144992s]
Jun 27 19:30:33.863: INFO: Created: latency-svc-45rkl
Jun 27 19:30:34.018: INFO: Got endpoints: latency-svc-45rkl [1.916308731s]
Jun 27 19:30:34.033: INFO: Created: latency-svc-q9chd
Jun 27 19:30:34.036: INFO: Got endpoints: latency-svc-q9chd [1.872062338s]
Jun 27 19:30:34.112: INFO: Created: latency-svc-hjrbg
Jun 27 19:30:34.113: INFO: Got endpoints: latency-svc-hjrbg [1.762278551s]
Jun 27 19:30:34.251: INFO: Created: latency-svc-fg5q9
Jun 27 19:30:34.257: INFO: Got endpoints: latency-svc-fg5q9 [1.837596494s]
Jun 27 19:30:34.335: INFO: Created: latency-svc-j8456
Jun 27 19:30:34.341: INFO: Got endpoints: latency-svc-j8456 [1.789182211s]
Jun 27 19:30:34.522: INFO: Created: latency-svc-l7qsd
Jun 27 19:30:34.529: INFO: Got endpoints: latency-svc-l7qsd [1.900385261s]
Jun 27 19:30:34.616: INFO: Created: latency-svc-96pgn
Jun 27 19:30:34.616: INFO: Got endpoints: latency-svc-96pgn [1.834974934s]
Jun 27 19:30:34.759: INFO: Created: latency-svc-2rpsh
Jun 27 19:30:34.779: INFO: Got endpoints: latency-svc-2rpsh [1.929518428s]
Jun 27 19:30:34.940: INFO: Created: latency-svc-9v7q2
Jun 27 19:30:34.948: INFO: Got endpoints: latency-svc-9v7q2 [1.888695158s]
Jun 27 19:30:35.184: INFO: Created: latency-svc-wbddj
Jun 27 19:30:35.234: INFO: Got endpoints: latency-svc-wbddj [2.101334231s]
Jun 27 19:30:35.425: INFO: Created: latency-svc-n9k2f
Jun 27 19:30:35.432: INFO: Got endpoints: latency-svc-n9k2f [2.088526976s]
Jun 27 19:30:35.498: INFO: Created: latency-svc-vv6zh
Jun 27 19:30:35.501: INFO: Got endpoints: latency-svc-vv6zh [2.08430443s]
Jun 27 19:30:35.639: INFO: Created: latency-svc-pk6xt
Jun 27 19:30:35.673: INFO: Got endpoints: latency-svc-pk6xt [2.105119984s]
Jun 27 19:30:35.744: INFO: Created: latency-svc-d4gzw
Jun 27 19:30:35.971: INFO: Got endpoints: latency-svc-d4gzw [2.326082005s]
Jun 27 19:30:35.978: INFO: Created: latency-svc-vcmzt
Jun 27 19:30:35.993: INFO: Got endpoints: latency-svc-vcmzt [2.214723592s]
Jun 27 19:30:36.060: INFO: Created: latency-svc-969wm
Jun 27 19:30:36.064: INFO: Got endpoints: latency-svc-969wm [2.045614369s]
Jun 27 19:30:36.186: INFO: Created: latency-svc-z5m8z
Jun 27 19:30:36.191: INFO: Got endpoints: latency-svc-z5m8z [2.155831791s]
Jun 27 19:30:36.407: INFO: Created: latency-svc-rlp9j
Jun 27 19:30:36.410: INFO: Got endpoints: latency-svc-rlp9j [2.296682219s]
Jun 27 19:30:36.468: INFO: Created: latency-svc-kb9j9
Jun 27 19:30:36.472: INFO: Got endpoints: latency-svc-kb9j9 [2.215267945s]
Jun 27 19:30:36.614: INFO: Created: latency-svc-59vvr
Jun 27 19:30:36.616: INFO: Got endpoints: latency-svc-59vvr [2.275172562s]
Jun 27 19:30:36.708: INFO: Created: latency-svc-fr8bv
Jun 27 19:30:36.862: INFO: Got endpoints: latency-svc-fr8bv [2.332758801s]
Jun 27 19:30:36.882: INFO: Created: latency-svc-wbm9j
Jun 27 19:30:36.887: INFO: Got endpoints: latency-svc-wbm9j [2.27140124s]
Jun 27 19:30:37.063: INFO: Created: latency-svc-lxf2t
Jun 27 19:30:37.068: INFO: Got endpoints: latency-svc-lxf2t [2.288782811s]
Jun 27 19:30:37.140: INFO: Created: latency-svc-vgpfj
Jun 27 19:30:37.145: INFO: Got endpoints: latency-svc-vgpfj [2.196685532s]
Jun 27 19:30:37.274: INFO: Created: latency-svc-fxwr4
Jun 27 19:30:37.278: INFO: Got endpoints: latency-svc-fxwr4 [2.043954199s]
Jun 27 19:30:37.344: INFO: Created: latency-svc-wffgm
Jun 27 19:30:37.352: INFO: Got endpoints: latency-svc-wffgm [1.919127682s]
Jun 27 19:30:37.500: INFO: Created: latency-svc-tz27d
Jun 27 19:30:37.507: INFO: Got endpoints: latency-svc-tz27d [2.006280636s]
Jun 27 19:30:37.576: INFO: Created: latency-svc-schzt
Jun 27 19:30:37.578: INFO: Got endpoints: latency-svc-schzt [1.905403008s]
Jun 27 19:30:37.714: INFO: Created: latency-svc-dxnds
Jun 27 19:30:37.723: INFO: Got endpoints: latency-svc-dxnds [1.751826076s]
Jun 27 19:30:37.801: INFO: Created: latency-svc-qdh7h
Jun 27 19:30:37.923: INFO: Got endpoints: latency-svc-qdh7h [1.930167139s]
Jun 27 19:30:37.928: INFO: Created: latency-svc-mbmqf
Jun 27 19:30:37.937: INFO: Got endpoints: latency-svc-mbmqf [1.87272888s]
Jun 27 19:30:37.997: INFO: Created: latency-svc-hj2nl
Jun 27 19:30:38.000: INFO: Got endpoints: latency-svc-hj2nl [1.808641335s]
Jun 27 19:30:38.199: INFO: Created: latency-svc-xmt77
Jun 27 19:30:38.203: INFO: Got endpoints: latency-svc-xmt77 [1.793774304s]
Jun 27 19:30:38.296: INFO: Created: latency-svc-rjrf2
Jun 27 19:30:38.447: INFO: Got endpoints: latency-svc-rjrf2 [1.974893488s]
Jun 27 19:30:38.463: INFO: Created: latency-svc-7qtkv
Jun 27 19:30:38.469: INFO: Got endpoints: latency-svc-7qtkv [1.852619201s]
Jun 27 19:30:38.541: INFO: Created: latency-svc-c2h7f
Jun 27 19:30:38.713: INFO: Got endpoints: latency-svc-c2h7f [1.851074143s]
Jun 27 19:30:38.729: INFO: Created: latency-svc-bvp4g
Jun 27 19:30:38.732: INFO: Got endpoints: latency-svc-bvp4g [1.844062961s]
Jun 27 19:30:38.799: INFO: Created: latency-svc-9bwwl
Jun 27 19:30:38.800: INFO: Got endpoints: latency-svc-9bwwl [1.732180253s]
Jun 27 19:30:38.952: INFO: Created: latency-svc-frzbd
Jun 27 19:30:38.967: INFO: Got endpoints: latency-svc-frzbd [1.82155483s]
Jun 27 19:30:39.042: INFO: Created: latency-svc-6x5zv
Jun 27 19:30:39.230: INFO: Got endpoints: latency-svc-6x5zv [1.951890062s]
Jun 27 19:30:39.238: INFO: Created: latency-svc-8m7mz
Jun 27 19:30:39.248: INFO: Got endpoints: latency-svc-8m7mz [1.895992095s]
Jun 27 19:30:39.433: INFO: Created: latency-svc-4czt4
Jun 27 19:30:39.439: INFO: Got endpoints: latency-svc-4czt4 [1.931487632s]
Jun 27 19:30:39.602: INFO: Created: latency-svc-b9ft6
Jun 27 19:30:39.606: INFO: Got endpoints: latency-svc-b9ft6 [2.028243159s]
Jun 27 19:30:39.889: INFO: Created: latency-svc-7twx5
Jun 27 19:30:39.891: INFO: Got endpoints: latency-svc-7twx5 [2.167929884s]
Jun 27 19:30:40.256: INFO: Created: latency-svc-l9hfm
Jun 27 19:30:40.263: INFO: Got endpoints: latency-svc-l9hfm [2.339762011s]
Jun 27 19:30:40.520: INFO: Created: latency-svc-8kvvx
Jun 27 19:30:40.522: INFO: Got endpoints: latency-svc-8kvvx [2.585211766s]
Jun 27 19:30:41.169: INFO: Created: latency-svc-t56q5
Jun 27 19:30:41.170: INFO: Got endpoints: latency-svc-t56q5 [3.17001728s]
Jun 27 19:30:41.522: INFO: Created: latency-svc-sqp2c
Jun 27 19:30:41.523: INFO: Got endpoints: latency-svc-sqp2c [3.319118772s]
Jun 27 19:30:41.892: INFO: Created: latency-svc-9dd69
Jun 27 19:30:41.897: INFO: Got endpoints: latency-svc-9dd69 [3.449577673s]
Jun 27 19:30:42.121: INFO: Created: latency-svc-t5dtd
Jun 27 19:30:42.127: INFO: Got endpoints: latency-svc-t5dtd [3.65813208s]
Jun 27 19:30:42.424: INFO: Created: latency-svc-xkwfh
Jun 27 19:30:42.429: INFO: Got endpoints: latency-svc-xkwfh [3.715867587s]
Jun 27 19:30:42.506: INFO: Created: latency-svc-t44fp
Jun 27 19:30:42.513: INFO: Got endpoints: latency-svc-t44fp [3.780933335s]
Jun 27 19:30:42.772: INFO: Created: latency-svc-b7fcv
Jun 27 19:30:42.777: INFO: Got endpoints: latency-svc-b7fcv [3.976797781s]
Jun 27 19:30:43.074: INFO: Created: latency-svc-c2db6
Jun 27 19:30:43.087: INFO: Got endpoints: latency-svc-c2db6 [4.120050142s]
Jun 27 19:30:43.440: INFO: Created: latency-svc-zrwxq
Jun 27 19:30:43.450: INFO: Got endpoints: latency-svc-zrwxq [4.219621251s]
Jun 27 19:30:43.733: INFO: Created: latency-svc-b4668
Jun 27 19:30:43.735: INFO: Got endpoints: latency-svc-b4668 [4.487714595s]
Jun 27 19:30:44.081: INFO: Created: latency-svc-g8ngx
Jun 27 19:30:44.085: INFO: Got endpoints: latency-svc-g8ngx [4.646215234s]
Jun 27 19:30:44.300: INFO: Created: latency-svc-njz6v
Jun 27 19:30:44.303: INFO: Got endpoints: latency-svc-njz6v [4.696861226s]
Jun 27 19:30:44.714: INFO: Created: latency-svc-cflwf
Jun 27 19:30:44.717: INFO: Got endpoints: latency-svc-cflwf [4.826020473s]
Jun 27 19:30:45.197: INFO: Created: latency-svc-7mj58
Jun 27 19:30:45.213: INFO: Got endpoints: latency-svc-7mj58 [4.9503826s]
Jun 27 19:30:45.660: INFO: Created: latency-svc-8l7qj
Jun 27 19:30:45.660: INFO: Got endpoints: latency-svc-8l7qj [5.138138847s]
Jun 27 19:30:45.660: INFO: Latencies: [120.795142ms 194.227709ms 254.953314ms 344.39442ms 427.780918ms 707.230581ms 955.389798ms 1.29367734s 1.341077876s 1.532792838s 1.578433236s 1.601329816s 1.612786648s 1.617684358s 1.620528247s 1.620822728s 1.660364772s 1.66823368s 1.679334654s 1.679983148s 1.686625681s 1.687087617s 1.691144992s 1.701016865s 1.707785927s 1.728223461s 1.732180253s 1.7334762s 1.735856482s 1.751826076s 1.755132491s 1.762278551s 1.763852893s 1.766729975s 1.767450036s 1.776454473s 1.789182211s 1.79197308s 1.793774304s 1.795349056s 1.799303554s 1.801874887s 1.807108423s 1.808641335s 1.809897479s 1.810888475s 1.811668413s 1.82155483s 1.826476342s 1.834974934s 1.836091624s 1.837596494s 1.843480978s 1.844062961s 1.845790831s 1.851074143s 1.851773367s 1.852619201s 1.872062338s 1.87272888s 1.88343898s 1.888695158s 1.895992095s 1.900385261s 1.905403008s 1.913978314s 1.916308731s 1.917430752s 1.919127682s 1.923084958s 1.929518428s 1.930167139s 1.931487632s 1.938908879s 1.939510292s 1.943926462s 1.949929985s 1.951890062s 1.95848847s 1.958617019s 1.965813027s 1.96663481s 1.968540519s 1.974893488s 1.975531943s 1.975808831s 1.978005831s 1.979759894s 1.981048629s 1.981898255s 1.989132285s 1.989821553s 1.991358937s 1.993304164s 1.993318577s 1.994015891s 2.006280636s 2.011888717s 2.026568247s 2.027251558s 2.028243159s 2.040054574s 2.043954199s 2.045614369s 2.054804038s 2.077465037s 2.08430443s 2.086935652s 2.087377504s 2.088526976s 2.088966751s 2.092341463s 2.094225829s 2.101334231s 2.105119984s 2.109420204s 2.128563994s 2.133575666s 2.155831791s 2.156255646s 2.167929884s 2.173291713s 2.194333259s 2.196685532s 2.214723592s 2.215267945s 2.22676319s 2.265055681s 2.27140124s 2.275172562s 2.278959693s 2.286229901s 2.288782811s 2.296682219s 2.31745273s 2.326082005s 2.332758801s 2.339762011s 2.373815687s 2.395017315s 2.418611833s 2.435792208s 2.484992134s 2.516316847s 2.516424627s 2.526302288s 2.542255287s 2.542425375s 2.553437781s 2.585211766s 2.599452596s 2.65360229s 2.653772221s 2.664125365s 2.689193661s 2.700846038s 2.719224706s 2.818991806s 2.864630003s 2.875598019s 2.942844399s 2.962963568s 3.06136803s 3.067301784s 3.17001728s 3.181944705s 3.217870436s 3.319118772s 3.351660843s 3.449577673s 3.461516817s 3.496511305s 3.587582093s 3.632146492s 3.65813208s 3.715867587s 3.780933335s 3.853728851s 3.874344088s 3.976797781s 4.011220485s 4.098444861s 4.109592074s 4.114719446s 4.120050142s 4.219621251s 4.267249558s 4.356605196s 4.38786885s 4.487714595s 4.559696431s 4.578790281s 4.614159058s 4.640247693s 4.646215234s 4.696861226s 4.826020473s 4.87114085s 4.9503826s 5.138138847s]
Jun 27 19:30:45.660: INFO: 50 %ile: 2.028243159s
Jun 27 19:30:45.660: INFO: 90 %ile: 4.011220485s
Jun 27 19:30:45.660: INFO: 99 %ile: 4.9503826s
Jun 27 19:30:45.660: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:30:45.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-j28lv" for this suite.
Jun 27 19:31:23.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:31:23.815: INFO: namespace: e2e-tests-svc-latency-j28lv, resource: bindings, ignored listing per whitelist
Jun 27 19:31:23.889: INFO: namespace e2e-tests-svc-latency-j28lv deletion completed in 38.169688271s

• [SLOW TEST:76.543 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
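The service-latency spec above sorts its 200 endpoint-creation latencies and reports 50/90/99 %ile values. The reported percentiles can be approximated with a nearest-rank computation; this is a minimal sketch (the function name and the exact rounding rule are assumptions — the real e2e framework is Go and may round differently):

```python
def nearest_rank_percentile(sorted_samples, p):
    """Return the p-th percentile of an already-sorted sample using the
    nearest-rank method: the value at ceil-like rank round(p/100 * n)."""
    n = len(sorted_samples)
    # Clamp to a valid 0-based index so p=0 and p=100 stay in range.
    idx = min(n - 1, max(0, int(round(p / 100.0 * n)) - 1))
    return sorted_samples[idx]
```

For a sorted 200-element sample this picks the 100th, 180th, and 198th values for the 50/90/99 %ile lines, matching how three of the listed latencies become the summary figures.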
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:31:23.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-26308a01-9912-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 19:31:24.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-26313f0e-9912-11e9-8fa9-0242ac110005" in namespace "e2e-tests-configmap-6h2hd" to be "success or failure"
Jun 27 19:31:24.096: INFO: Pod "pod-configmaps-26313f0e-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15114ms
Jun 27 19:31:26.102: INFO: Pod "pod-configmaps-26313f0e-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012657947s
Jun 27 19:31:28.107: INFO: Pod "pod-configmaps-26313f0e-9912-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017496784s
STEP: Saw pod success
Jun 27 19:31:28.107: INFO: Pod "pod-configmaps-26313f0e-9912-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:31:28.109: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-26313f0e-9912-11e9-8fa9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jun 27 19:31:28.228: INFO: Waiting for pod pod-configmaps-26313f0e-9912-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:31:28.231: INFO: Pod pod-configmaps-26313f0e-9912-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:31:28.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6h2hd" for this suite.
Jun 27 19:31:34.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:31:34.525: INFO: namespace: e2e-tests-configmap-6h2hd, resource: bindings, ignored listing per whitelist
Jun 27 19:31:34.542: INFO: namespace e2e-tests-configmap-6h2hd deletion completed in 6.305358563s

• [SLOW TEST:10.653 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
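The ConfigMap spec above repeats a pattern seen throughout this log: "Waiting up to 5m0s for pod … to be 'success or failure'", polling the pod phase and printing elapsed time until a terminal phase is reached. A minimal sketch of that wait loop, assuming a caller-supplied `get_phase` callable (the real framework is Go and uses polling helpers such as `wait.PollImmediate`; names here are illustrative):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a terminal pod phase or the
    timeout expires, logging elapsed time like the e2e framework does."""
    terminal = ("Succeeded", "Failed")
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        phase = get_phase()
        print(f'Phase="{phase}". Elapsed: {elapsed:.6f}s')
        if phase in terminal:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod not terminal after {timeout}s")
        time.sleep(interval)
```

The loop polls immediately before checking the timeout, which is why the log always shows at least one "Pending" sample even for fast pods.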
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:31:34.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 27 19:31:34.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-wrdzh'
Jun 27 19:31:37.169: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 27 19:31:37.169: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jun 27 19:31:41.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-wrdzh'
Jun 27 19:31:41.405: INFO: stderr: ""
Jun 27 19:31:41.405: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:31:41.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wrdzh" for this suite.
Jun 27 19:31:47.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:31:47.469: INFO: namespace: e2e-tests-kubectl-wrdzh, resource: bindings, ignored listing per whitelist
Jun 27 19:31:47.511: INFO: namespace e2e-tests-kubectl-wrdzh deletion completed in 6.100856589s

• [SLOW TEST:12.969 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:31:47.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 19:31:47.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34420bc2-9912-11e9-8fa9-0242ac110005" in namespace "e2e-tests-downward-api-d968t" to be "success or failure"
Jun 27 19:31:47.686: INFO: Pod "downwardapi-volume-34420bc2-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152677ms
Jun 27 19:31:49.691: INFO: Pod "downwardapi-volume-34420bc2-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013110304s
Jun 27 19:31:51.696: INFO: Pod "downwardapi-volume-34420bc2-9912-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018813456s
STEP: Saw pod success
Jun 27 19:31:51.696: INFO: Pod "downwardapi-volume-34420bc2-9912-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:31:51.701: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-34420bc2-9912-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 19:31:51.783: INFO: Waiting for pod downwardapi-volume-34420bc2-9912-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:31:51.793: INFO: Pod downwardapi-volume-34420bc2-9912-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:31:51.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-d968t" for this suite.
Jun 27 19:31:57.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:31:57.871: INFO: namespace: e2e-tests-downward-api-d968t, resource: bindings, ignored listing per whitelist
Jun 27 19:31:57.946: INFO: namespace e2e-tests-downward-api-d968t deletion completed in 6.143559354s

• [SLOW TEST:10.436 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:31:57.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jun 27 19:32:02.661: INFO: Successfully updated pod "annotationupdate3a71b46f-9912-11e9-8fa9-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:32:04.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wmkwr" for this suite.
Jun 27 19:32:26.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:32:26.845: INFO: namespace: e2e-tests-projected-wmkwr, resource: bindings, ignored listing per whitelist
Jun 27 19:32:26.866: INFO: namespace e2e-tests-projected-wmkwr deletion completed in 22.143497463s

• [SLOW TEST:28.920 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:32:26.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-4bade621-9912-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 19:32:27.017: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4baf0045-9912-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-2gf2c" to be "success or failure"
Jun 27 19:32:27.031: INFO: Pod "pod-projected-configmaps-4baf0045-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.431783ms
Jun 27 19:32:29.035: INFO: Pod "pod-projected-configmaps-4baf0045-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018632698s
Jun 27 19:32:31.039: INFO: Pod "pod-projected-configmaps-4baf0045-9912-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022530052s
STEP: Saw pod success
Jun 27 19:32:31.039: INFO: Pod "pod-projected-configmaps-4baf0045-9912-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:32:31.041: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-4baf0045-9912-11e9-8fa9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jun 27 19:32:31.065: INFO: Waiting for pod pod-projected-configmaps-4baf0045-9912-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:32:31.070: INFO: Pod pod-projected-configmaps-4baf0045-9912-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:32:31.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2gf2c" for this suite.
Jun 27 19:32:37.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:32:37.162: INFO: namespace: e2e-tests-projected-2gf2c, resource: bindings, ignored listing per whitelist
Jun 27 19:32:37.240: INFO: namespace e2e-tests-projected-2gf2c deletion completed in 6.16239064s

• [SLOW TEST:10.373 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:32:37.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-449r
STEP: Creating a pod to test atomic-volume-subpath
Jun 27 19:32:37.592: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-449r" in namespace "e2e-tests-subpath-z7w6p" to be "success or failure"
Jun 27 19:32:37.625: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Pending", Reason="", readiness=false. Elapsed: 33.228444ms
Jun 27 19:32:39.683: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090394435s
Jun 27 19:32:41.686: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093629683s
Jun 27 19:32:43.691: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Running", Reason="", readiness=false. Elapsed: 6.099257774s
Jun 27 19:32:45.696: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Running", Reason="", readiness=false. Elapsed: 8.103577449s
Jun 27 19:32:47.702: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Running", Reason="", readiness=false. Elapsed: 10.109493187s
Jun 27 19:32:49.718: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Running", Reason="", readiness=false. Elapsed: 12.126182091s
Jun 27 19:32:51.725: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Running", Reason="", readiness=false. Elapsed: 14.132633411s
Jun 27 19:32:53.730: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Running", Reason="", readiness=false. Elapsed: 16.137603386s
Jun 27 19:32:55.734: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Running", Reason="", readiness=false. Elapsed: 18.141801194s
Jun 27 19:32:57.739: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Running", Reason="", readiness=false. Elapsed: 20.146914343s
Jun 27 19:32:59.747: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Running", Reason="", readiness=false. Elapsed: 22.154332037s
Jun 27 19:33:01.785: INFO: Pod "pod-subpath-test-configmap-449r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.192659801s
STEP: Saw pod success
Jun 27 19:33:01.785: INFO: Pod "pod-subpath-test-configmap-449r" satisfied condition "success or failure"
Jun 27 19:33:01.790: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-subpath-test-configmap-449r container test-container-subpath-configmap-449r: 
STEP: delete the pod
Jun 27 19:33:01.834: INFO: Waiting for pod pod-subpath-test-configmap-449r to disappear
Jun 27 19:33:01.846: INFO: Pod pod-subpath-test-configmap-449r no longer exists
STEP: Deleting pod pod-subpath-test-configmap-449r
Jun 27 19:33:01.846: INFO: Deleting pod "pod-subpath-test-configmap-449r" in namespace "e2e-tests-subpath-z7w6p"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:33:01.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-z7w6p" for this suite.
Jun 27 19:33:07.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:33:07.984: INFO: namespace: e2e-tests-subpath-z7w6p, resource: bindings, ignored listing per whitelist
Jun 27 19:33:08.045: INFO: namespace e2e-tests-subpath-z7w6p deletion completed in 6.193584909s

• [SLOW TEST:30.804 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:33:08.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 27 19:33:08.143: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64369457-9912-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-cmzvg" to be "success or failure"
Jun 27 19:33:08.151: INFO: Pod "downwardapi-volume-64369457-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.958523ms
Jun 27 19:33:10.154: INFO: Pod "downwardapi-volume-64369457-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01168923s
Jun 27 19:33:12.167: INFO: Pod "downwardapi-volume-64369457-9912-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024034226s
STEP: Saw pod success
Jun 27 19:33:12.167: INFO: Pod "downwardapi-volume-64369457-9912-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:33:12.169: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod downwardapi-volume-64369457-9912-11e9-8fa9-0242ac110005 container client-container: 
STEP: delete the pod
Jun 27 19:33:12.202: INFO: Waiting for pod downwardapi-volume-64369457-9912-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:33:12.207: INFO: Pod downwardapi-volume-64369457-9912-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:33:12.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cmzvg" for this suite.
Jun 27 19:33:18.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:33:18.332: INFO: namespace: e2e-tests-projected-cmzvg, resource: bindings, ignored listing per whitelist
Jun 27 19:33:18.350: INFO: namespace e2e-tests-projected-cmzvg deletion completed in 6.139960938s

• [SLOW TEST:10.306 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:33:18.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jun 27 19:33:18.432: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-ppmrl" to be "success or failure"
Jun 27 19:33:18.444: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.164031ms
Jun 27 19:33:20.449: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016659208s
Jun 27 19:33:22.457: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024881431s
STEP: Saw pod success
Jun 27 19:33:22.457: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jun 27 19:33:22.462: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jun 27 19:33:22.837: INFO: Waiting for pod pod-host-path-test to disappear
Jun 27 19:33:22.929: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:33:22.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-ppmrl" for this suite.
Jun 27 19:33:29.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:33:29.119: INFO: namespace: e2e-tests-hostpath-ppmrl, resource: bindings, ignored listing per whitelist
Jun 27 19:33:29.148: INFO: namespace e2e-tests-hostpath-ppmrl deletion completed in 6.147106826s

• [SLOW TEST:10.797 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:33:29.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 27 19:33:29.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k6mp9'
Jun 27 19:33:29.344: INFO: stderr: ""
Jun 27 19:33:29.344: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jun 27 19:33:34.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k6mp9 -o json'
Jun 27 19:33:34.477: INFO: stderr: ""
Jun 27 19:33:34.477: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-06-27T19:33:29Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-k6mp9\",\n        \"resourceVersion\": \"1383587\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-k6mp9/pods/e2e-test-nginx-pod\",\n        \"uid\": \"70d9ca3e-9912-11e9-a678-fa163e0cec1d\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-wbddp\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-x6tdbol33slm\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-wbddp\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-wbddp\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-06-27T19:33:29Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-06-27T19:33:33Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-06-27T19:33:33Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-06-27T19:33:29Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://515881708a5854ef4c454190a37f505cbd4574477063861858ce57fb7c2fd20c\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2019-06-27T19:33:31Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"192.168.100.12\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-06-27T19:33:29Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jun 27 19:33:34.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-k6mp9'
Jun 27 19:33:34.709: INFO: stderr: ""
Jun 27 19:33:34.709: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jun 27 19:33:34.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k6mp9'
Jun 27 19:33:45.736: INFO: stderr: ""
Jun 27 19:33:45.736: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:33:45.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k6mp9" for this suite.
Jun 27 19:33:51.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:33:51.826: INFO: namespace: e2e-tests-kubectl-k6mp9, resource: bindings, ignored listing per whitelist
Jun 27 19:33:51.876: INFO: namespace e2e-tests-kubectl-k6mp9 deletion completed in 6.133988077s

• [SLOW TEST:22.728 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:33:51.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jun 27 19:33:52.565: INFO: Waiting up to 5m0s for pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-25twr" in namespace "e2e-tests-svcaccounts-xtcvh" to be "success or failure"
Jun 27 19:33:52.573: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-25twr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169611ms
Jun 27 19:33:54.582: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-25twr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017216011s
Jun 27 19:33:56.602: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-25twr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036790148s
Jun 27 19:33:58.608: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-25twr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043177862s
STEP: Saw pod success
Jun 27 19:33:58.608: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-25twr" satisfied condition "success or failure"
Jun 27 19:33:58.613: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-25twr container token-test: 
STEP: delete the pod
Jun 27 19:33:58.719: INFO: Waiting for pod pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-25twr to disappear
Jun 27 19:33:58.725: INFO: Pod pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-25twr no longer exists
STEP: Creating a pod to test consume service account root CA
Jun 27 19:33:58.732: INFO: Waiting up to 5m0s for pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-5g4r5" in namespace "e2e-tests-svcaccounts-xtcvh" to be "success or failure"
Jun 27 19:33:58.738: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-5g4r5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578191ms
Jun 27 19:34:00.743: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-5g4r5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011015129s
Jun 27 19:34:02.747: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-5g4r5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014974318s
Jun 27 19:34:04.751: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-5g4r5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019204363s
STEP: Saw pod success
Jun 27 19:34:04.751: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-5g4r5" satisfied condition "success or failure"
Jun 27 19:34:04.755: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-5g4r5 container root-ca-test: 
STEP: delete the pod
Jun 27 19:34:04.798: INFO: Waiting for pod pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-5g4r5 to disappear
Jun 27 19:34:04.802: INFO: Pod pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-5g4r5 no longer exists
STEP: Creating a pod to test consume service account namespace
Jun 27 19:34:04.807: INFO: Waiting up to 5m0s for pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-q2jwf" in namespace "e2e-tests-svcaccounts-xtcvh" to be "success or failure"
Jun 27 19:34:04.846: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-q2jwf": Phase="Pending", Reason="", readiness=false. Elapsed: 39.666467ms
Jun 27 19:34:06.992: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-q2jwf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185533477s
Jun 27 19:34:08.996: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-q2jwf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189556439s
Jun 27 19:34:11.002: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-q2jwf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.194812112s
STEP: Saw pod success
Jun 27 19:34:11.002: INFO: Pod "pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-q2jwf" satisfied condition "success or failure"
Jun 27 19:34:11.005: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-q2jwf container namespace-test: 
STEP: delete the pod
Jun 27 19:34:11.056: INFO: Waiting for pod pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-q2jwf to disappear
Jun 27 19:34:11.061: INFO: Pod pod-service-account-7eb1ea5f-9912-11e9-8fa9-0242ac110005-q2jwf no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:34:11.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-xtcvh" for this suite.
Jun 27 19:34:17.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:34:17.150: INFO: namespace: e2e-tests-svcaccounts-xtcvh, resource: bindings, ignored listing per whitelist
Jun 27 19:34:17.157: INFO: namespace e2e-tests-svcaccounts-xtcvh deletion completed in 6.08981303s

• [SLOW TEST:25.280 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:34:17.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-8d664417-9912-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 19:34:17.238: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d66cd8d-9912-11e9-8fa9-0242ac110005" in namespace "e2e-tests-configmap-7rvgf" to be "success or failure"
Jun 27 19:34:17.257: INFO: Pod "pod-configmaps-8d66cd8d-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.8227ms
Jun 27 19:34:19.262: INFO: Pod "pod-configmaps-8d66cd8d-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024142177s
Jun 27 19:34:21.270: INFO: Pod "pod-configmaps-8d66cd8d-9912-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031814321s
STEP: Saw pod success
Jun 27 19:34:21.270: INFO: Pod "pod-configmaps-8d66cd8d-9912-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:34:21.278: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-8d66cd8d-9912-11e9-8fa9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jun 27 19:34:21.332: INFO: Waiting for pod pod-configmaps-8d66cd8d-9912-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:34:21.346: INFO: Pod pod-configmaps-8d66cd8d-9912-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:34:21.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7rvgf" for this suite.
Jun 27 19:34:27.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:34:27.425: INFO: namespace: e2e-tests-configmap-7rvgf, resource: bindings, ignored listing per whitelist
Jun 27 19:34:27.500: INFO: namespace e2e-tests-configmap-7rvgf deletion completed in 6.136543155s

• [SLOW TEST:10.343 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:34:27.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-qk4s
STEP: Creating a pod to test atomic-volume-subpath
Jun 27 19:34:27.677: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qk4s" in namespace "e2e-tests-subpath-wdhjz" to be "success or failure"
Jun 27 19:34:27.698: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Pending", Reason="", readiness=false. Elapsed: 20.555329ms
Jun 27 19:34:29.705: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028356138s
Jun 27 19:34:31.710: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032963892s
Jun 27 19:34:33.720: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Running", Reason="", readiness=false. Elapsed: 6.04258721s
Jun 27 19:34:35.725: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Running", Reason="", readiness=false. Elapsed: 8.048437965s
Jun 27 19:34:37.733: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Running", Reason="", readiness=false. Elapsed: 10.055480846s
Jun 27 19:34:39.738: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Running", Reason="", readiness=false. Elapsed: 12.061283296s
Jun 27 19:34:41.744: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Running", Reason="", readiness=false. Elapsed: 14.066893699s
Jun 27 19:34:43.750: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Running", Reason="", readiness=false. Elapsed: 16.072480732s
Jun 27 19:34:45.754: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Running", Reason="", readiness=false. Elapsed: 18.076886368s
Jun 27 19:34:47.759: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Running", Reason="", readiness=false. Elapsed: 20.082070265s
Jun 27 19:34:49.765: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Running", Reason="", readiness=false. Elapsed: 22.087782024s
Jun 27 19:34:51.771: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Running", Reason="", readiness=false. Elapsed: 24.093710012s
Jun 27 19:34:53.780: INFO: Pod "pod-subpath-test-downwardapi-qk4s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.102535737s
STEP: Saw pod success
Jun 27 19:34:53.780: INFO: Pod "pod-subpath-test-downwardapi-qk4s" satisfied condition "success or failure"
Jun 27 19:34:53.785: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-subpath-test-downwardapi-qk4s container test-container-subpath-downwardapi-qk4s: 
STEP: delete the pod
Jun 27 19:34:53.905: INFO: Waiting for pod pod-subpath-test-downwardapi-qk4s to disappear
Jun 27 19:34:53.914: INFO: Pod pod-subpath-test-downwardapi-qk4s no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-qk4s
Jun 27 19:34:53.914: INFO: Deleting pod "pod-subpath-test-downwardapi-qk4s" in namespace "e2e-tests-subpath-wdhjz"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:34:53.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-wdhjz" for this suite.
Jun 27 19:34:59.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:35:00.002: INFO: namespace: e2e-tests-subpath-wdhjz, resource: bindings, ignored listing per whitelist
Jun 27 19:35:00.126: INFO: namespace e2e-tests-subpath-wdhjz deletion completed in 6.202562353s

• [SLOW TEST:32.626 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:35:00.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 27 19:35:00.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-kf56d'
Jun 27 19:35:00.327: INFO: stderr: ""
Jun 27 19:35:00.327: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jun 27 19:35:00.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kf56d'
Jun 27 19:35:15.719: INFO: stderr: ""
Jun 27 19:35:15.719: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:35:15.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kf56d" for this suite.
Jun 27 19:35:21.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:35:21.874: INFO: namespace: e2e-tests-kubectl-kf56d, resource: bindings, ignored listing per whitelist
Jun 27 19:35:21.906: INFO: namespace e2e-tests-kubectl-kf56d deletion completed in 6.178396242s

• [SLOW TEST:21.780 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:35:21.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-b4051c26-9912-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 19:35:22.034: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b405bda3-9912-11e9-8fa9-0242ac110005" in namespace "e2e-tests-projected-br4jt" to be "success or failure"
Jun 27 19:35:22.052: INFO: Pod "pod-projected-configmaps-b405bda3-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.098684ms
Jun 27 19:35:24.115: INFO: Pod "pod-projected-configmaps-b405bda3-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081108087s
Jun 27 19:35:26.120: INFO: Pod "pod-projected-configmaps-b405bda3-9912-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08588987s
STEP: Saw pod success
Jun 27 19:35:26.120: INFO: Pod "pod-projected-configmaps-b405bda3-9912-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:35:26.124: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-projected-configmaps-b405bda3-9912-11e9-8fa9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jun 27 19:35:26.171: INFO: Waiting for pod pod-projected-configmaps-b405bda3-9912-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:35:26.174: INFO: Pod pod-projected-configmaps-b405bda3-9912-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:35:26.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-br4jt" for this suite.
Jun 27 19:35:32.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:35:32.289: INFO: namespace: e2e-tests-projected-br4jt, resource: bindings, ignored listing per whitelist
Jun 27 19:35:32.333: INFO: namespace e2e-tests-projected-br4jt deletion completed in 6.155105837s

• [SLOW TEST:10.427 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:35:32.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-ba56a485-9912-11e9-8fa9-0242ac110005
STEP: Creating a pod to test consume configMaps
Jun 27 19:35:32.644: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba589ecc-9912-11e9-8fa9-0242ac110005" in namespace "e2e-tests-configmap-g2h6j" to be "success or failure"
Jun 27 19:35:32.720: INFO: Pod "pod-configmaps-ba589ecc-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 75.394947ms
Jun 27 19:35:34.724: INFO: Pod "pod-configmaps-ba589ecc-9912-11e9-8fa9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079443194s
Jun 27 19:35:36.730: INFO: Pod "pod-configmaps-ba589ecc-9912-11e9-8fa9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085649439s
STEP: Saw pod success
Jun 27 19:35:36.730: INFO: Pod "pod-configmaps-ba589ecc-9912-11e9-8fa9-0242ac110005" satisfied condition "success or failure"
Jun 27 19:35:36.734: INFO: Trying to get logs from node hunter-server-x6tdbol33slm pod pod-configmaps-ba589ecc-9912-11e9-8fa9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jun 27 19:35:36.780: INFO: Waiting for pod pod-configmaps-ba589ecc-9912-11e9-8fa9-0242ac110005 to disappear
Jun 27 19:35:36.789: INFO: Pod pod-configmaps-ba589ecc-9912-11e9-8fa9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:35:36.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-g2h6j" for this suite.
Jun 27 19:35:42.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:35:42.845: INFO: namespace: e2e-tests-configmap-g2h6j, resource: bindings, ignored listing per whitelist
Jun 27 19:35:42.892: INFO: namespace e2e-tests-configmap-g2h6j deletion completed in 6.098817201s

• [SLOW TEST:10.559 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 27 19:35:42.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jun 27 19:38:29.441: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:29.459: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:31.459: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:31.464: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:33.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:33.490: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:35.459: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:35.472: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:37.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:37.465: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:39.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:39.466: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:41.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:41.498: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:43.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:43.466: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:45.459: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:45.463: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:47.459: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:47.463: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:49.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:49.466: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:51.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:51.466: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:53.460: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:53.465: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:55.459: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:55.465: INFO: Pod pod-with-poststart-exec-hook still exists
Jun 27 19:38:57.459: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun 27 19:38:57.464: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 27 19:38:57.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7sc5d" for this suite.
Jun 27 19:39:19.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 27 19:39:19.510: INFO: namespace: e2e-tests-container-lifecycle-hook-7sc5d, resource: bindings, ignored listing per whitelist
Jun 27 19:39:19.597: INFO: namespace e2e-tests-container-lifecycle-hook-7sc5d deletion completed in 22.129858241s

• [SLOW TEST:216.705 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
Jun 27 19:39:19.598: INFO: Running AfterSuite actions on all nodes
Jun 27 19:39:19.598: INFO: Running AfterSuite actions on node 1
Jun 27 19:39:19.598: INFO: Skipping dumping logs from cluster

Ran 200 of 2162 Specs in 6611.919 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1962 Skipped
PASS