I0111 10:47:08.765294 9 e2e.go:224] Starting e2e run "b6dcc63c-345f-11ea-b0bd-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578739628 - Will randomize all specs
Will run 201 of 2164 specs
Jan 11 10:47:08.932: INFO: >>> kubeConfig: /root/.kube/config
Jan 11 10:47:08.935: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 11 10:47:08.961: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 11 10:47:08.999: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 11 10:47:08.999: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 11 10:47:08.999: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 11 10:47:09.008: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 11 10:47:09.008: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 11 10:47:09.008: INFO: e2e test version: v1.13.12
Jan 11 10:47:09.009: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 10:47:09.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Jan 11 10:47:09.224: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 11 10:47:19.980: INFO: Successfully updated pod "labelsupdateb76f2bff-345f-11ea-b0bd-0242ac110005"
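The [It] step above only logs the generated pod name. For context, the pod this test exercises mounts its own metadata.labels through a downwardAPI volume, so patching the labels of the running pod changes the contents of the mounted file. A minimal sketch using client-go types; the object name, image, command, and label values here are illustrative assumptions, not taken from the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod of the shape this test creates: a downwardAPI volume
	// projects metadata.labels into /etc/podinfo/labels, and the container
	// keeps reading that file so a label update becomes observable.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-example", // the test generates a name like "labelsupdateb76f2bff-..."
			Labels: map[string]string{"testlabel": "initial"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```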
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 10:47:22.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xcpd8" for this suite.
Jan 11 10:47:46.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 10:47:46.161: INFO: namespace: e2e-tests-downward-api-xcpd8, resource: bindings, ignored listing per whitelist
Jan 11 10:47:46.284: INFO: namespace e2e-tests-downward-api-xcpd8 deletion completed in 24.211004268s
• [SLOW TEST:37.275 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 10:47:46.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 10:47:46.471: INFO: Creating deployment "nginx-deployment"
Jan 11 10:47:46.516: INFO: Waiting for observed generation 1
Jan 11 10:47:48.935: INFO: Waiting for all required pods to come up
Jan 11 10:47:50.463: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 11 10:48:27.239: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 11 10:48:27.254: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 11 10:48:27.268: INFO: Updating deployment nginx-deployment
Jan 11 10:48:27.268: INFO: Waiting for observed generation 2
Jan 11 10:48:30.142: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 11 10:48:30.151: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 11 10:48:30.160: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 11 10:48:30.624: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 11 10:48:30.625: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 11 10:48:30.631: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 11 10:48:31.112: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 11 10:48:31.112: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 11 10:48:31.141: INFO: Updating deployment nginx-deployment
Jan 11 10:48:31.141: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 11 10:48:32.609: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 11 10:48:38.419: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
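Those last two lines are the proportional scaling the test verifies: with maxSurge=3 the deployment may run up to 30 + 3 = 33 pods during the rollout, and the 20 extra replicas from the scale-up from 10 to 30 are split between the two ReplicaSets roughly in proportion to their current sizes, taking the first rollout's ReplicaSet from 8 to 20 and the second from 5 to 13 (20 + 13 = 33). As a rough sketch, an equivalent apps/v1 Deployment object assembled from the spec dumped below; the field values come from the log, but this is not the test's own construction code:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(10)               // initial size; the test later scales this to 30
	maxUnavailable := intstr.FromInt(2) // matches MaxUnavailable:2 in the dump below
	maxSurge := intstr.FromInt(3)       // matches MaxSurge:3 in the dump below

	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "nginx-deployment",
			Labels: map[string]string{"name": "nginx"},
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine", // the rollout then switches to the non-existent "nginx:404"
					}},
				},
			},
		},
	}
	fmt.Println(deployment.Name)
}
```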
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 11 10:48:38.963: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p4tsw/deployments/nginx-deployment,UID:cda55b01-345f-11ea-a994-fa163e34d433,ResourceVersion:17911369,Generation:3,CreationTimestamp:2020-01-11 10:47:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-11 10:48:32 +0000 UTC 2020-01-11 10:48:32 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-11 10:48:38 +0000 UTC 2020-01-11 10:47:46 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}
Jan 11 10:48:40.137: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p4tsw/replicasets/nginx-deployment-5c98f8fb5,UID:e5f4dd8d-345f-11ea-a994-fa163e34d433,ResourceVersion:17911361,Generation:3,CreationTimestamp:2020-01-11 10:48:27 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment cda55b01-345f-11ea-a994-fa163e34d433 0xc000b6f177 0xc000b6f178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 11 10:48:40.137: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 11 10:48:40.137: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p4tsw/replicasets/nginx-deployment-85ddf47c5d,UID:cdbef306-345f-11ea-a994-fa163e34d433,ResourceVersion:17911362,Generation:3,CreationTimestamp:2020-01-11 10:47:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment cda55b01-345f-11ea-a994-fa163e34d433 0xc000b6f237 0xc000b6f238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 11 10:48:41.194: INFO: Pod "nginx-deployment-5c98f8fb5-5xw9f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5xw9f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-5xw9f,UID:e5f92bdc-345f-11ea-a994-fa163e34d433,ResourceVersion:17911275,Generation:0,CreationTimestamp:2020-01-11 10:48:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc000b6fc57 0xc000b6fc58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b6fcc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b6fce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-11 10:48:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.194: INFO: Pod "nginx-deployment-5c98f8fb5-89h4s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-89h4s,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-89h4s,UID:e5fdf53b-345f-11ea-a994-fa163e34d433,ResourceVersion:17911290,Generation:0,CreationTimestamp:2020-01-11 10:48:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc000b6fda7 0xc000b6fda8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b6fe10} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b6fe30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-11 10:48:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.195: INFO: Pod "nginx-deployment-5c98f8fb5-gfksz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gfksz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-gfksz,UID:e9542369-345f-11ea-a994-fa163e34d433,ResourceVersion:17911324,Generation:0,CreationTimestamp:2020-01-11 10:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc000b6ff07 0xc000b6ff08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b6ff70} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b6ff90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.195: INFO: Pod "nginx-deployment-5c98f8fb5-hbsb8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hbsb8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-hbsb8,UID:e91f4bfb-345f-11ea-a994-fa163e34d433,ResourceVersion:17911364,Generation:0,CreationTimestamp:2020-01-11 10:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc00159c017 0xc00159c018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159c080} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159c0a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-11 10:48:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.195: INFO: Pod "nginx-deployment-5c98f8fb5-kcffd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kcffd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-kcffd,UID:e9a25f2d-345f-11ea-a994-fa163e34d433,ResourceVersion:17911336,Generation:0,CreationTimestamp:2020-01-11 10:48:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc00159c167 0xc00159c168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159c1d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159c1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.195: INFO: Pod "nginx-deployment-5c98f8fb5-m4x97" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-m4x97,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-m4x97,UID:eb3cf621-345f-11ea-a994-fa163e34d433,ResourceVersion:17911359,Generation:0,CreationTimestamp:2020-01-11 10:48:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc00159c267 0xc00159c268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159c2d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159c2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.196: INFO: Pod "nginx-deployment-5c98f8fb5-mbkmj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mbkmj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-mbkmj,UID:e9a3bb20-345f-11ea-a994-fa163e34d433,ResourceVersion:17911341,Generation:0,CreationTimestamp:2020-01-11 10:48:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc00159c367 0xc00159c368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] 
[] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159c3d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159c3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.196: INFO: Pod "nginx-deployment-5c98f8fb5-n9bmn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n9bmn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-n9bmn,UID:e9a36c20-345f-11ea-a994-fa163e34d433,ResourceVersion:17911340,Generation:0,CreationTimestamp:2020-01-11 10:48:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc00159c467 0xc00159c468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159c4d0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00159c4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.196: INFO: Pod "nginx-deployment-5c98f8fb5-rtw5c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rtw5c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-rtw5c,UID:e9548729-345f-11ea-a994-fa163e34d433,ResourceVersion:17911322,Generation:0,CreationTimestamp:2020-01-11 10:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc00159c567 0xc00159c568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159c5f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159c610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.196: INFO: Pod "nginx-deployment-5c98f8fb5-tsvvf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tsvvf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-tsvvf,UID:e639872a-345f-11ea-a994-fa163e34d433,ResourceVersion:17911310,Generation:0,CreationTimestamp:2020-01-11 10:48:27 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc00159c687 0xc00159c688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159c6f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159c710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-11 10:48:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.196: INFO: Pod "nginx-deployment-5c98f8fb5-vsc74" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vsc74,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-vsc74,UID:e5fdee9e-345f-11ea-a994-fa163e34d433,ResourceVersion:17911286,Generation:0,CreationTimestamp:2020-01-11 10:48:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc00159c7e7 0xc00159c7e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159c850} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159c870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-11 10:48:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.197: INFO: Pod "nginx-deployment-5c98f8fb5-wm6d2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wm6d2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-wm6d2,UID:e634c9a1-345f-11ea-a994-fa163e34d433,ResourceVersion:17911292,Generation:0,CreationTimestamp:2020-01-11 10:48:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc00159c977 0xc00159c978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159ca10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159ca40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-11 10:48:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.197: INFO: Pod "nginx-deployment-5c98f8fb5-z6lmg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z6lmg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-5c98f8fb5-z6lmg,UID:e9a39790-345f-11ea-a994-fa163e34d433,ResourceVersion:17911347,Generation:0,CreationTimestamp:2020-01-11 10:48:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e5f4dd8d-345f-11ea-a994-fa163e34d433 0xc00159cb87 0xc00159cb88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159cc20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159cc40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.197: INFO: Pod "nginx-deployment-85ddf47c5d-2th2x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2th2x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-2th2x,UID:cdd6c01d-345f-11ea-a994-fa163e34d433,ResourceVersion:17911182,Generation:0,CreationTimestamp:2020-01-11 10:47:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159ccb7 0xc00159ccb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159cd30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159cd60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:47 
+0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-11 10:47:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-11 10:48:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3b159c37ecf2612311d0cb8f4dec44ad4896dfd55ab8a7c911efd77958158eaa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.197: INFO: Pod "nginx-deployment-85ddf47c5d-466c6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-466c6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-466c6,UID:cde7f573-345f-11ea-a994-fa163e34d433,ResourceVersion:17911228,Generation:0,CreationTimestamp:2020-01-11 10:47:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159ce37 0xc00159ce38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159cea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159cec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-11 10:47:49 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-01-11 10:48:22 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://86e1acad45a49bd35b4e9b9b06840f8b554bb9cb53186dd048a31154af360e42}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.197: INFO: Pod "nginx-deployment-85ddf47c5d-4wpwv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4wpwv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-4wpwv,UID:e95565de-345f-11ea-a994-fa163e34d433,ResourceVersion:17911321,Generation:0,CreationTimestamp:2020-01-11 10:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159cf87 0xc00159cf88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159d000} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159d020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.197: INFO: Pod "nginx-deployment-85ddf47c5d-5xd98" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5xd98,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-5xd98,UID:cdcede92-345f-11ea-a994-fa163e34d433,ResourceVersion:17911175,Generation:0,CreationTimestamp:2020-01-11 10:47:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159d097 0xc00159d098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159d100} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159d120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-11 10:47:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-11 10:48:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://90b7978909fde8a2b452d8d78fd2cd9c9e03e29e2aaba00a64f98d4e6a3d4627}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.198: INFO: Pod "nginx-deployment-85ddf47c5d-78pdd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-78pdd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-78pdd,UID:cdd66f0f-345f-11ea-a994-fa163e34d433,ResourceVersion:17911224,Generation:0,CreationTimestamp:2020-01-11 10:47:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159d1e7 0xc00159d1e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159d250} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159d280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-11 10:47:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-11 10:48:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cc3a9a058db0c7dad01d4036de99173c0e3d04f81d901abd4af09974ddb4bd03}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.198: INFO: Pod "nginx-deployment-85ddf47c5d-7x5n8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7x5n8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-7x5n8,UID:e954ca5c-345f-11ea-a994-fa163e34d433,ResourceVersion:17911323,Generation:0,CreationTimestamp:2020-01-11 10:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159d347 0xc00159d348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159d3b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159d3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.198: INFO: Pod "nginx-deployment-85ddf47c5d-9gmjr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9gmjr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-9gmjr,UID:e9a319f7-345f-11ea-a994-fa163e34d433,ResourceVersion:17911346,Generation:0,CreationTimestamp:2020-01-11 10:48:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159d457 0xc00159d458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159d4c0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00159d4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.198: INFO: Pod "nginx-deployment-85ddf47c5d-9lqtp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9lqtp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-9lqtp,UID:e897e5b0-345f-11ea-a994-fa163e34d433,ResourceVersion:17911349,Generation:0,CreationTimestamp:2020-01-11 10:48:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159d557 0xc00159d558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159d5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159d5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-11 10:48:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 
10:48:41.198: INFO: Pod "nginx-deployment-85ddf47c5d-crrmn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-crrmn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-crrmn,UID:e9a30208-345f-11ea-a994-fa163e34d433,ResourceVersion:17911342,Generation:0,CreationTimestamp:2020-01-11 10:48:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159d6a7 0xc00159d6a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159d710} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159d730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.199: INFO: Pod "nginx-deployment-85ddf47c5d-cx2br" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cx2br,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-cx2br,UID:e91f921b-345f-11ea-a994-fa163e34d433,ResourceVersion:17911372,Generation:0,CreationTimestamp:2020-01-11 10:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159d7a7 0xc00159d7a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159d810} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159d830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-11 10:48:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.199: INFO: Pod "nginx-deployment-85ddf47c5d-gfvfh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gfvfh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-gfvfh,UID:e9a342e5-345f-11ea-a994-fa163e34d433,ResourceVersion:17911345,Generation:0,CreationTimestamp:2020-01-11 10:48:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159d8f7 0xc00159d8f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159d960} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159d980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.199: INFO: Pod "nginx-deployment-85ddf47c5d-hzvhh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hzvhh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-hzvhh,UID:cde81202-345f-11ea-a994-fa163e34d433,ResourceVersion:17911219,Generation:0,CreationTimestamp:2020-01-11 10:47:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159da07 0xc00159da08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159da70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00159da90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-11 10:47:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-11 10:48:22 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6023380833a6b8c65b29536b91ddc8189d13777d1f2dc2482308275105835fb2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.199: INFO: Pod "nginx-deployment-85ddf47c5d-kbgxs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kbgxs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-kbgxs,UID:cdd699e0-345f-11ea-a994-fa163e34d433,ResourceVersion:17911237,Generation:0,CreationTimestamp:2020-01-11 10:47:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159db57 0xc00159db58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159dbc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159dbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {ContainersReady True 
0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-11 10:47:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-11 10:48:19 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0d043a063834cb4f85a34a4ec97e57abedaf5f6f7434a134153659c723910d05}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.199: INFO: Pod "nginx-deployment-85ddf47c5d-lmlcj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lmlcj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-lmlcj,UID:e9546504-345f-11ea-a994-fa163e34d433,ResourceVersion:17911326,Generation:0,CreationTimestamp:2020-01-11 10:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159dca7 0xc00159dca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159dd10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159dd30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.199: INFO: Pod "nginx-deployment-85ddf47c5d-mldcv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mldcv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-mldcv,UID:e91f74c3-345f-11ea-a994-fa163e34d433,ResourceVersion:17911377,Generation:0,CreationTimestamp:2020-01-11 10:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159dda7 0xc00159dda8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159de10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159de30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-11 10:48:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.200: INFO: Pod "nginx-deployment-85ddf47c5d-mrwpg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mrwpg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-mrwpg,UID:e9a358f7-345f-11ea-a994-fa163e34d433,ResourceVersion:17911343,Generation:0,CreationTimestamp:2020-01-11 10:48:33 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159dee7 0xc00159dee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00159df50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00159df70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.200: INFO: Pod "nginx-deployment-85ddf47c5d-ms8hb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ms8hb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-ms8hb,UID:e955203f-345f-11ea-a994-fa163e34d433,ResourceVersion:17911325,Generation:0,CreationTimestamp:2020-01-11 10:48:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc00159dfe7 0xc00159dfe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019e0050} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019e0070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.200: INFO: Pod "nginx-deployment-85ddf47c5d-nxpnm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nxpnm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-nxpnm,UID:cdce741a-345f-11ea-a994-fa163e34d433,ResourceVersion:17911211,Generation:0,CreationTimestamp:2020-01-11 10:47:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc0019e00e7 0xc0019e00e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019e0150} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0019e0170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-11 10:47:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-11 10:48:21 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4d36c50c8b07bf9167de986a1a68a4039a50c73d78b629f947677cb522c037be}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.200: INFO: Pod "nginx-deployment-85ddf47c5d-t7q25" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t7q25,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-t7q25,UID:cde813a7-345f-11ea-a994-fa163e34d433,ResourceVersion:17911214,Generation:0,CreationTimestamp:2020-01-11 10:47:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc0019e0237 0xc0019e0238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019e02a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019e02c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {ContainersReady True 
0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:47:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-11 10:47:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-11 10:48:21 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9db6149b67a7beac48dfe7a0953b7aaf474e5ffbd9eea09b16f8ee293121075a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 11 10:48:41.200: INFO: Pod "nginx-deployment-85ddf47c5d-v5qdf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v5qdf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p4tsw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p4tsw/pods/nginx-deployment-85ddf47c5d-v5qdf,UID:e9a32ca7-345f-11ea-a994-fa163e34d433,ResourceVersion:17911344,Generation:0,CreationTimestamp:2020-01-11 10:48:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cdbef306-345f-11ea-a994-fa163e34d433 0xc0019e0387 0xc0019e0388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fvv6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fvv6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9fvv6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019e03f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019e0410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 10:48:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:48:41.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-p4tsw" for this suite. 
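(Editor's sketch, not part of the test output.) The pod dumps above come from the "nginx-deployment" rollout this spec drives. The same proportional-scaling behaviour can be provoked with plain kubectl; the deployment name and image below are taken from the dumps, while the replica counts and the broken image tag are illustrative assumptions, not the framework's exact fixture.

# Sketch only: name and image match the pod dumps above; replica counts and the
# unpullable tag are assumptions for illustration.
kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl scale deployment nginx-deployment --replicas=10
# Start a rollout that cannot finish (unpullable tag), leaving two ReplicaSets live.
kubectl set image deployment/nginx-deployment nginx=docker.io/library/nginx:does-not-exist
# Scaling now is split proportionally across the old and new ReplicaSets.
kubectl scale deployment nginx-deployment --replicas=30
kubectl get rs -o wide   # replicas divided between the two ReplicaSets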
Jan 11 10:49:58.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:49:59.535: INFO: namespace: e2e-tests-deployment-p4tsw, resource: bindings, ignored listing per whitelist Jan 11 10:49:59.636: INFO: namespace e2e-tests-deployment-p4tsw deletion completed in 1m16.143431227s • [SLOW TEST:133.351 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:49:59.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-1de97aa1-3460-11ea-b0bd-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-1de97afe-3460-11ea-b0bd-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-1de97aa1-3460-11ea-b0bd-0242ac110005 STEP: Updating configmap cm-test-opt-upd-1de97afe-3460-11ea-b0bd-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-1de97b21-3460-11ea-b0bd-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:50:42.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hxfcg" for this suite. 
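(Editor's sketch, not part of the test output.) The pod this spec creates mounts a projected volume whose ConfigMap sources are marked optional, so deleting one source, updating another, and creating a third after the pod starts are all reflected in the mounted files. A minimal hand-written equivalent follows; the names are made up rather than the randomized cm-test-opt-* names above.

# Sketch of a pod with a projected volume backed by optional ConfigMaps.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: demo
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-opt-del        # deleted after the pod starts
          optional: true
      - configMap:
          name: cm-opt-upd        # updated after the pod starts
          optional: true
EOF
# Because the sources are optional, the kubelet keeps the volume in sync as the
# ConfigMaps come and go instead of failing the pod.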
Jan 11 10:51:08.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:51:09.087: INFO: namespace: e2e-tests-projected-hxfcg, resource: bindings, ignored listing per whitelist Jan 11 10:51:09.110: INFO: namespace e2e-tests-projected-hxfcg deletion completed in 26.220416946s • [SLOW TEST:69.473 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:51:09.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 11 10:51:09.372: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:51:10.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-4dz7w" for this suite. 
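(Editor's sketch, not part of the test output.) The CustomResourceDefinition this spec registers and removes is generated with a random name; a hand-written equivalent, assuming the cluster serves the v1beta1 apiextensions API, would look like the following, with an illustrative group and kind.

# Minimal CRD; group, plural and kind are made up for illustration.
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl get crd foos.example.com      # definition is established
kubectl delete crd foos.example.com   # and can be removed again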
Jan 11 10:51:16.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:51:16.718: INFO: namespace: e2e-tests-custom-resource-definition-4dz7w, resource: bindings, ignored listing per whitelist Jan 11 10:51:16.736: INFO: namespace e2e-tests-custom-resource-definition-4dz7w deletion completed in 6.179974348s • [SLOW TEST:7.626 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:51:16.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jan 11 10:51:16.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-c5rvp run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 11 10:51:31.083: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0111 10:51:29.421205 43 log.go:172] (0xc0001380b0) (0xc0005554a0) Create stream\nI0111 10:51:29.421524 43 log.go:172] (0xc0001380b0) (0xc0005554a0) Stream added, broadcasting: 1\nI0111 10:51:29.441390 43 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0111 10:51:29.441467 43 log.go:172] (0xc0001380b0) (0xc0007bbcc0) Create stream\nI0111 10:51:29.441508 43 log.go:172] (0xc0001380b0) (0xc0007bbcc0) Stream added, broadcasting: 3\nI0111 10:51:29.443180 43 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0111 10:51:29.443235 43 log.go:172] (0xc0001380b0) (0xc000554fa0) Create stream\nI0111 10:51:29.443253 43 log.go:172] (0xc0001380b0) (0xc000554fa0) Stream added, broadcasting: 5\nI0111 10:51:29.444909 43 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0111 10:51:29.444971 43 log.go:172] (0xc0001380b0) (0xc0007bbd60) Create stream\nI0111 10:51:29.444993 43 log.go:172] (0xc0001380b0) (0xc0007bbd60) Stream added, broadcasting: 7\nI0111 10:51:29.449777 43 log.go:172] (0xc0001380b0) Reply frame received for 7\nI0111 10:51:29.450255 43 log.go:172] (0xc0007bbcc0) (3) Writing data frame\nI0111 10:51:29.450608 43 log.go:172] (0xc0007bbcc0) (3) Writing data frame\nI0111 10:51:29.462429 43 log.go:172] (0xc0001380b0) Data frame received for 5\nI0111 10:51:29.462467 43 log.go:172] (0xc000554fa0) (5) Data frame handling\nI0111 10:51:29.462505 43 log.go:172] (0xc000554fa0) (5) Data frame sent\nI0111 10:51:29.466864 43 log.go:172] (0xc0001380b0) Data frame received for 5\nI0111 10:51:29.466890 43 log.go:172] (0xc000554fa0) (5) Data frame handling\nI0111 10:51:29.466917 43 log.go:172] (0xc000554fa0) (5) Data frame sent\nI0111 10:51:31.023582 43 log.go:172] (0xc0001380b0) Data frame received for 1\nI0111 10:51:31.023837 43 log.go:172] (0xc0001380b0) (0xc0007bbd60) Stream removed, broadcasting: 7\nI0111 10:51:31.023922 43 log.go:172] (0xc0005554a0) (1) Data frame handling\nI0111 10:51:31.023951 43 log.go:172] (0xc0001380b0) (0xc0007bbcc0) Stream removed, broadcasting: 3\nI0111 10:51:31.024033 43 log.go:172] (0xc0005554a0) (1) Data frame sent\nI0111 10:51:31.024192 43 log.go:172] (0xc0001380b0) (0xc000554fa0) Stream removed, broadcasting: 5\nI0111 10:51:31.024320 43 log.go:172] (0xc0001380b0) (0xc0005554a0) Stream removed, broadcasting: 1\nI0111 10:51:31.024359 43 log.go:172] (0xc0001380b0) Go away received\nI0111 10:51:31.025099 43 log.go:172] (0xc0001380b0) (0xc0005554a0) Stream removed, broadcasting: 1\nI0111 10:51:31.025125 43 log.go:172] (0xc0001380b0) (0xc0007bbcc0) Stream removed, broadcasting: 3\nI0111 10:51:31.025135 43 log.go:172] (0xc0001380b0) (0xc000554fa0) Stream removed, broadcasting: 5\nI0111 10:51:31.025144 43 log.go:172] (0xc0001380b0) (0xc0007bbd60) Stream removed, broadcasting: 7\n" Jan 11 10:51:31.083: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:51:33.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-c5rvp" for this suite. 
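The exact command the test executed is captured verbatim above. A hand-run reproduction of the same --rm behaviour could look roughly like the following; the namespace and input string are placeholders, and the --generator flag it relies on is already deprecated in this release and has since been removed from kubectl.

# Feed some stdin to an attached --rm job; kubectl deletes the job when the session ends.
echo "abcd1234" | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'

# This is what the "verifying the job e2e-test-rm-busybox-job was deleted" step checks:
kubectl get jobs e2e-test-rm-busybox-job   # expected to return NotFound
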
Jan 11 10:51:39.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:51:39.994: INFO: namespace: e2e-tests-kubectl-c5rvp, resource: bindings, ignored listing per whitelist Jan 11 10:51:40.109: INFO: namespace e2e-tests-kubectl-c5rvp deletion completed in 6.649796329s • [SLOW TEST:23.373 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:51:40.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-59110d2c-3460-11ea-b0bd-0242ac110005 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:51:54.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rvnsw" for this suite. 
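The ConfigMap binary-data spec creates a ConfigMap carrying both a text key and a binaryData key and mounts it into a pod. A minimal hand-written sketch of the same shape follows; all names, the image and the probe command are placeholders of mine, not the framework's mounttest pod.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo        # placeholder name
data:
  data-1: "value-1"
binaryData:
  dump.bin: "CiAgICAgICAg"           # arbitrary bytes, base64-encoded
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-binary-demo    # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Print the text key and show that the binary key landed as a file.
    command: ["sh", "-c", "cat /etc/cm/data-1 && ls -l /etc/cm/dump.bin"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-binary-demo
EOF
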
Jan 11 10:52:18.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:52:18.943: INFO: namespace: e2e-tests-configmap-rvnsw, resource: bindings, ignored listing per whitelist Jan 11 10:52:18.959: INFO: namespace e2e-tests-configmap-rvnsw deletion completed in 24.230627825s • [SLOW TEST:38.850 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:52:18.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 11 10:52:19.178: INFO: Waiting up to 5m0s for pod "pod-7029405d-3460-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-zwgcr" to be "success or failure" Jan 11 10:52:19.252: INFO: Pod "pod-7029405d-3460-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 72.993817ms Jan 11 10:52:21.348: INFO: Pod "pod-7029405d-3460-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169055975s Jan 11 10:52:23.387: INFO: Pod "pod-7029405d-3460-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208262103s Jan 11 10:52:25.841: INFO: Pod "pod-7029405d-3460-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.661947706s Jan 11 10:52:27.866: INFO: Pod "pod-7029405d-3460-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.687735215s Jan 11 10:52:29.883: INFO: Pod "pod-7029405d-3460-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.704595766s STEP: Saw pod success Jan 11 10:52:29.883: INFO: Pod "pod-7029405d-3460-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 10:52:29.891: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7029405d-3460-11ea-b0bd-0242ac110005 container test-container: STEP: delete the pod Jan 11 10:52:30.040: INFO: Waiting for pod pod-7029405d-3460-11ea-b0bd-0242ac110005 to disappear Jan 11 10:52:30.150: INFO: Pod pod-7029405d-3460-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:52:30.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zwgcr" for this suite. 
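The emptyDir spec runs the framework's mounttest image with arguments that create a file with the requested mode on a memory-backed volume and then exit, which is why the pod is polled until Succeeded. Below is a rough busybox-based stand-in for the same (non-root, 0666, tmpfs) combination; the image, uid and commands are my substitutions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-0666-demo     # placeholder name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # the "non-root" part of the spec name
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c"]
    args:
    - >
      touch /test-volume/f && chmod 0666 /test-volume/f &&
      stat -c '%a' /test-volume/f && grep test-volume /proc/mounts
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
EOF

# As with the e2e check, the pod should reach Succeeded; its log should show "666"
# plus a tmpfs entry for /test-volume.
kubectl logs emptydir-tmpfs-0666-demo
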
Jan 11 10:52:36.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:52:36.565: INFO: namespace: e2e-tests-emptydir-zwgcr, resource: bindings, ignored listing per whitelist Jan 11 10:52:36.633: INFO: namespace e2e-tests-emptydir-zwgcr deletion completed in 6.463210821s • [SLOW TEST:17.674 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:52:36.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-v62bt STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 10:52:36.811: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 11 10:53:11.183: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-v62bt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 10:53:11.183: INFO: >>> kubeConfig: /root/.kube/config I0111 10:53:11.290091 9 log.go:172] (0xc00094b8c0) (0xc001aaa1e0) Create stream I0111 10:53:11.290211 9 log.go:172] (0xc00094b8c0) (0xc001aaa1e0) Stream added, broadcasting: 1 I0111 10:53:11.301958 9 log.go:172] (0xc00094b8c0) Reply frame received for 1 I0111 10:53:11.302022 9 log.go:172] (0xc00094b8c0) (0xc000f4ef00) Create stream I0111 10:53:11.302059 9 log.go:172] (0xc00094b8c0) (0xc000f4ef00) Stream added, broadcasting: 3 I0111 10:53:11.304429 9 log.go:172] (0xc00094b8c0) Reply frame received for 3 I0111 10:53:11.304567 9 log.go:172] (0xc00094b8c0) (0xc001aaa280) Create stream I0111 10:53:11.304627 9 log.go:172] (0xc00094b8c0) (0xc001aaa280) Stream added, broadcasting: 5 I0111 10:53:11.306984 9 log.go:172] (0xc00094b8c0) Reply frame received for 5 I0111 10:53:11.581003 9 log.go:172] (0xc00094b8c0) Data frame received for 3 I0111 10:53:11.581047 9 log.go:172] (0xc000f4ef00) (3) Data frame handling I0111 10:53:11.581061 9 log.go:172] (0xc000f4ef00) (3) Data frame sent I0111 10:53:11.725931 9 log.go:172] (0xc00094b8c0) (0xc000f4ef00) Stream removed, broadcasting: 3 I0111 10:53:11.726139 9 log.go:172] (0xc00094b8c0) Data frame received for 1 I0111 10:53:11.726166 9 log.go:172] (0xc001aaa1e0) (1) Data frame handling I0111 10:53:11.726185 9 log.go:172] (0xc001aaa1e0) (1) Data frame sent I0111 10:53:11.726198 9 log.go:172] (0xc00094b8c0) 
(0xc001aaa1e0) Stream removed, broadcasting: 1 I0111 10:53:11.726304 9 log.go:172] (0xc00094b8c0) (0xc001aaa280) Stream removed, broadcasting: 5 I0111 10:53:11.726373 9 log.go:172] (0xc00094b8c0) (0xc001aaa1e0) Stream removed, broadcasting: 1 I0111 10:53:11.726398 9 log.go:172] (0xc00094b8c0) (0xc000f4ef00) Stream removed, broadcasting: 3 I0111 10:53:11.726498 9 log.go:172] (0xc00094b8c0) (0xc001aaa280) Stream removed, broadcasting: 5 I0111 10:53:11.726633 9 log.go:172] (0xc00094b8c0) Go away received Jan 11 10:53:11.727: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:53:11.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-v62bt" for this suite. Jan 11 10:53:35.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:53:35.985: INFO: namespace: e2e-tests-pod-network-test-v62bt, resource: bindings, ignored listing per whitelist Jan 11 10:53:35.995: INFO: namespace e2e-tests-pod-network-test-v62bt deletion completed in 24.227597017s • [SLOW TEST:59.362 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:53:35.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-9e19b918-3460-11ea-b0bd-0242ac110005 STEP: Creating a pod to test consume secrets Jan 11 10:53:36.244: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-m7c59" to be "success or failure" Jan 11 10:53:36.252: INFO: Pod "pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.647137ms Jan 11 10:53:38.660: INFO: Pod "pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416396404s Jan 11 10:53:40.681: INFO: Pod "pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437234536s Jan 11 10:53:42.692: INFO: Pod "pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.448392908s Jan 11 10:53:44.779: INFO: Pod "pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535323578s Jan 11 10:53:46.792: INFO: Pod "pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.547932286s STEP: Saw pod success Jan 11 10:53:46.792: INFO: Pod "pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 10:53:46.797: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 11 10:53:47.964: INFO: Waiting for pod pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005 to disappear Jan 11 10:53:47.979: INFO: Pod pod-projected-secrets-9e1b1c39-3460-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:53:47.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-m7c59" for this suite. Jan 11 10:53:54.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:53:54.092: INFO: namespace: e2e-tests-projected-m7c59, resource: bindings, ignored listing per whitelist Jan 11 10:53:54.233: INFO: namespace e2e-tests-projected-m7c59 deletion completed in 6.229939754s • [SLOW TEST:18.238 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:53:54.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-fss8 STEP: Creating a pod to test atomic-volume-subpath Jan 11 10:53:54.667: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fss8" in namespace "e2e-tests-subpath-m5d8s" to be "success or failure" Jan 11 10:53:54.859: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Pending", Reason="", readiness=false. Elapsed: 191.294484ms Jan 11 10:53:56.899: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231488488s Jan 11 10:53:58.919: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.251966862s Jan 11 10:54:00.945: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277881688s Jan 11 10:54:02.961: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.293137056s Jan 11 10:54:05.194: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.526176208s Jan 11 10:54:07.225: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.55711283s Jan 11 10:54:09.234: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.566495393s Jan 11 10:54:11.252: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Running", Reason="", readiness=false. Elapsed: 16.584862015s Jan 11 10:54:13.271: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Running", Reason="", readiness=false. Elapsed: 18.603828405s Jan 11 10:54:15.290: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Running", Reason="", readiness=false. Elapsed: 20.622366533s Jan 11 10:54:17.315: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Running", Reason="", readiness=false. Elapsed: 22.647688362s Jan 11 10:54:19.337: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Running", Reason="", readiness=false. Elapsed: 24.670103876s Jan 11 10:54:21.357: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Running", Reason="", readiness=false. Elapsed: 26.689619294s Jan 11 10:54:23.374: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Running", Reason="", readiness=false. Elapsed: 28.706923607s Jan 11 10:54:25.391: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Running", Reason="", readiness=false. Elapsed: 30.7238275s Jan 11 10:54:27.770: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Running", Reason="", readiness=false. Elapsed: 33.102427037s Jan 11 10:54:29.786: INFO: Pod "pod-subpath-test-configmap-fss8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.118893734s STEP: Saw pod success Jan 11 10:54:29.786: INFO: Pod "pod-subpath-test-configmap-fss8" satisfied condition "success or failure" Jan 11 10:54:29.793: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-fss8 container test-container-subpath-configmap-fss8: STEP: delete the pod Jan 11 10:54:30.526: INFO: Waiting for pod pod-subpath-test-configmap-fss8 to disappear Jan 11 10:54:30.779: INFO: Pod pod-subpath-test-configmap-fss8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-fss8 Jan 11 10:54:30.780: INFO: Deleting pod "pod-subpath-test-configmap-fss8" in namespace "e2e-tests-subpath-m5d8s" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:54:30.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-m5d8s" for this suite. 
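The atomic-writer spec above mounts a single ConfigMap key at a subPath and keeps reading it for a while before exiting, which is why the pod sits in Running before reaching Succeeded. A simplified sketch of the subPath mechanics follows; the names, image and read loop are placeholders, not the framework's mounttest pod.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config          # placeholder name
data:
  title: "hello from a subPath"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-configmap-demo   # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    # Read the file a few times, mirroring the Running phase seen above, then exit 0.
    command: ["sh", "-c", "for i in $(seq 1 10); do cat /probe/title; sleep 2; done"]
    volumeMounts:
    - name: config
      mountPath: /probe/title
      subPath: title                 # mount just this key of the volume
  volumes:
  - name: config
    configMap:
      name: subpath-demo-config
EOF

Unlike a whole-volume ConfigMap mount, a subPath mount does not pick up later updates to the ConfigMap; the "updates should be reflected in volume" spec that follows mounts the volume root for exactly that reason.
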
Jan 11 10:54:36.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:54:37.046: INFO: namespace: e2e-tests-subpath-m5d8s, resource: bindings, ignored listing per whitelist Jan 11 10:54:37.244: INFO: namespace e2e-tests-subpath-m5d8s deletion completed in 6.443206599s • [SLOW TEST:43.011 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:54:37.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-c29ec343-3460-11ea-b0bd-0242ac110005 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-c29ec343-3460-11ea-b0bd-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:55:51.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t7jm4" for this suite. 
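The projected-ConfigMap update spec creates a pod with a projected volume sourcing a ConfigMap, updates the ConfigMap, and then waits for the kubelet to sync the new contents into the running container, which accounts for most of the 98 seconds. A hand-rolled version of the same loop is sketched here; names and the image are placeholders.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-update-demo        # placeholder name
data:
  data-1: "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap-demo # placeholder name
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: projected-update-demo
EOF

# Update the ConfigMap and watch the mounted file change after the kubelet's next sync.
kubectl patch configmap projected-update-demo --type merge -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f pod-projected-configmap-demo
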
Jan 11 10:56:15.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:56:15.499: INFO: namespace: e2e-tests-projected-t7jm4, resource: bindings, ignored listing per whitelist Jan 11 10:56:15.507: INFO: namespace e2e-tests-projected-t7jm4 deletion completed in 24.287531685s • [SLOW TEST:98.262 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:56:15.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-54v9p STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 11 10:56:15.693: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 11 10:56:52.240: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-54v9p PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 11 10:56:52.240: INFO: >>> kubeConfig: /root/.kube/config I0111 10:56:52.454997 9 log.go:172] (0xc00094bad0) (0xc001649900) Create stream I0111 10:56:52.455199 9 log.go:172] (0xc00094bad0) (0xc001649900) Stream added, broadcasting: 1 I0111 10:56:52.469033 9 log.go:172] (0xc00094bad0) Reply frame received for 1 I0111 10:56:52.469080 9 log.go:172] (0xc00094bad0) (0xc00128b540) Create stream I0111 10:56:52.469092 9 log.go:172] (0xc00094bad0) (0xc00128b540) Stream added, broadcasting: 3 I0111 10:56:52.471760 9 log.go:172] (0xc00094bad0) Reply frame received for 3 I0111 10:56:52.471819 9 log.go:172] (0xc00094bad0) (0xc00075d720) Create stream I0111 10:56:52.471850 9 log.go:172] (0xc00094bad0) (0xc00075d720) Stream added, broadcasting: 5 I0111 10:56:52.476001 9 log.go:172] (0xc00094bad0) Reply frame received for 5 I0111 10:56:54.263621 9 log.go:172] (0xc00094bad0) Data frame received for 3 I0111 10:56:54.263707 9 log.go:172] (0xc00128b540) (3) Data frame handling I0111 10:56:54.263740 9 log.go:172] (0xc00128b540) (3) Data frame sent I0111 10:56:54.465458 9 log.go:172] (0xc00094bad0) (0xc00128b540) Stream removed, broadcasting: 3 I0111 10:56:54.465606 9 log.go:172] (0xc00094bad0) Data frame received for 1 I0111 10:56:54.465625 9 log.go:172] (0xc001649900) (1) Data frame handling I0111 10:56:54.465639 9 log.go:172] (0xc001649900) (1) Data frame sent I0111 10:56:54.465692 9 log.go:172] (0xc00094bad0) (0xc001649900) Stream removed, 
broadcasting: 1 I0111 10:56:54.465817 9 log.go:172] (0xc00094bad0) (0xc00075d720) Stream removed, broadcasting: 5 I0111 10:56:54.465856 9 log.go:172] (0xc00094bad0) (0xc001649900) Stream removed, broadcasting: 1 I0111 10:56:54.465869 9 log.go:172] (0xc00094bad0) (0xc00128b540) Stream removed, broadcasting: 3 I0111 10:56:54.465883 9 log.go:172] (0xc00094bad0) (0xc00075d720) Stream removed, broadcasting: 5 I0111 10:56:54.466075 9 log.go:172] (0xc00094bad0) Go away received Jan 11 10:56:54.467: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:56:54.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-54v9p" for this suite. Jan 11 10:57:22.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:57:22.787: INFO: namespace: e2e-tests-pod-network-test-54v9p, resource: bindings, ignored listing per whitelist Jan 11 10:57:22.866: INFO: namespace e2e-tests-pod-network-test-54v9p deletion completed in 28.323039674s • [SLOW TEST:67.360 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:57:22.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 11 10:57:23.187: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:57:41.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-sgq8f" for this suite. 
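The init-container spec asserts that with restartPolicy: Never a failing init container fails the whole pod and the app container never starts. A minimal reproduction follows; the names, image and commands are mine, not the test's.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-restartnever-demo  # placeholder name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "exit 1"]  # init container exits non-zero
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo this should never run"]
EOF

# Expected end state: the pod reports Init:Error with phase Failed, and the
# "app" container is never started.
kubectl get pod init-fail-restartnever-demo
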
Jan 11 10:57:49.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:57:49.520: INFO: namespace: e2e-tests-init-container-sgq8f, resource: bindings, ignored listing per whitelist Jan 11 10:57:49.568: INFO: namespace e2e-tests-init-container-sgq8f deletion completed in 8.189473028s • [SLOW TEST:26.701 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:57:49.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 11 10:57:49.817: INFO: Waiting up to 5m0s for pod "pod-353e31ca-3461-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-j26rr" to be "success or failure" Jan 11 10:57:49.845: INFO: Pod "pod-353e31ca-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.426213ms Jan 11 10:57:51.860: INFO: Pod "pod-353e31ca-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043501963s Jan 11 10:57:53.892: INFO: Pod "pod-353e31ca-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075042193s Jan 11 10:57:56.504: INFO: Pod "pod-353e31ca-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.687458333s Jan 11 10:57:58.516: INFO: Pod "pod-353e31ca-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.699012505s Jan 11 10:58:00.545: INFO: Pod "pod-353e31ca-3461-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.728222306s STEP: Saw pod success Jan 11 10:58:00.545: INFO: Pod "pod-353e31ca-3461-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 10:58:00.559: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-353e31ca-3461-11ea-b0bd-0242ac110005 container test-container: STEP: delete the pod Jan 11 10:58:00.700: INFO: Waiting for pod pod-353e31ca-3461-11ea-b0bd-0242ac110005 to disappear Jan 11 10:58:00.706: INFO: Pod pod-353e31ca-3461-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:58:00.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-j26rr" for this suite. 
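This spec is the same harness as the (non-root,0666,tmpfs) case earlier, with the file created as 0644, so only the permission string reported in the pod log differs. With a longer-lived variant of the earlier demo pod (command swapped for a sleep), the same properties can be checked by hand:

# Assumes a pod shaped like emptydir-tmpfs-0666-demo above, kept alive with "sleep 3600".
kubectl exec emptydir-tmpfs-0666-demo -- sh -c \
  'id -u; stat -c "%a %u" /test-volume/f; grep test-volume /proc/mounts'
# Expected: a non-root uid, "644" for this spec (or "666" for the earlier one),
# and a tmpfs mount entry for /test-volume.
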
Jan 11 10:58:06.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:58:06.845: INFO: namespace: e2e-tests-emptydir-j26rr, resource: bindings, ignored listing per whitelist Jan 11 10:58:06.960: INFO: namespace e2e-tests-emptydir-j26rr deletion completed in 6.249607581s • [SLOW TEST:17.392 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:58:06.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-3fa4cc0a-3461-11ea-b0bd-0242ac110005 STEP: Creating a pod to test consume secrets Jan 11 10:58:07.261: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-jkb82" to be "success or failure" Jan 11 10:58:07.297: INFO: Pod "pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.016939ms Jan 11 10:58:09.393: INFO: Pod "pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13246161s Jan 11 10:58:11.425: INFO: Pod "pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163692184s Jan 11 10:58:13.443: INFO: Pod "pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182371103s Jan 11 10:58:16.117: INFO: Pod "pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.856166535s Jan 11 10:58:18.135: INFO: Pod "pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.874370338s STEP: Saw pod success Jan 11 10:58:18.135: INFO: Pod "pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 10:58:18.144: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Jan 11 10:58:18.859: INFO: Waiting for pod pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005 to disappear Jan 11 10:58:18.874: INFO: Pod pod-projected-secrets-3fa57e2e-3461-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 10:58:18.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jkb82" for this suite. Jan 11 10:58:24.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 10:58:25.043: INFO: namespace: e2e-tests-projected-jkb82, resource: bindings, ignored listing per whitelist Jan 11 10:58:25.055: INFO: namespace e2e-tests-projected-jkb82 deletion completed in 6.171977906s • [SLOW TEST:18.095 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 10:58:25.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-m5ckp Jan 11 10:58:35.627: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-m5ckp STEP: checking the pod's current state and verifying that restartCount is present Jan 11 10:58:35.633: INFO: Initial restart count of pod liveness-http is 0 Jan 11 10:58:53.904: INFO: Restart count of pod e2e-tests-container-probe-m5ckp/liveness-http is now 1 (18.270996311s elapsed) Jan 11 10:59:14.071: INFO: Restart count of pod e2e-tests-container-probe-m5ckp/liveness-http is now 2 (38.438002541s elapsed) Jan 11 10:59:34.754: INFO: Restart count of pod e2e-tests-container-probe-m5ckp/liveness-http is now 3 (59.120561466s elapsed) Jan 11 10:59:52.971: INFO: Restart count of pod e2e-tests-container-probe-m5ckp/liveness-http is now 4 (1m17.337487556s elapsed) Jan 11 11:01:06.002: INFO: Restart count of pod e2e-tests-container-probe-m5ckp/liveness-http is 
now 5 (2m30.369317918s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:01:06.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-m5ckp" for this suite. Jan 11 11:01:12.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:01:12.581: INFO: namespace: e2e-tests-container-probe-m5ckp, resource: bindings, ignored listing per whitelist Jan 11 11:01:12.592: INFO: namespace e2e-tests-container-probe-m5ckp deletion completed in 6.345655826s • [SLOW TEST:167.536 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:01:12.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-ae3908fc-3461-11ea-b0bd-0242ac110005 STEP: Creating a pod to test consume secrets Jan 11 11:01:12.929: INFO: Waiting up to 5m0s for pod "pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005" in namespace "e2e-tests-secrets-gp9n6" to be "success or failure" Jan 11 11:01:12.953: INFO: Pod "pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.086279ms Jan 11 11:01:14.976: INFO: Pod "pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047670377s Jan 11 11:01:16.988: INFO: Pod "pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05913803s Jan 11 11:01:19.602: INFO: Pod "pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.672898986s Jan 11 11:01:21.655: INFO: Pod "pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726209148s Jan 11 11:01:23.720: INFO: Pod "pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.791233828s STEP: Saw pod success Jan 11 11:01:23.720: INFO: Pod "pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:01:23.733: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 11 11:01:23.989: INFO: Waiting for pod pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005 to disappear Jan 11 11:01:24.258: INFO: Pod pod-secrets-ae4fd2d0-3461-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:01:24.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gp9n6" for this suite. Jan 11 11:01:30.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:01:30.674: INFO: namespace: e2e-tests-secrets-gp9n6, resource: bindings, ignored listing per whitelist Jan 11 11:01:30.729: INFO: namespace e2e-tests-secrets-gp9n6 deletion completed in 6.460211274s STEP: Destroying namespace "e2e-tests-secret-namespace-txkd4" for this suite. Jan 11 11:01:36.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:01:36.934: INFO: namespace: e2e-tests-secret-namespace-txkd4, resource: bindings, ignored listing per whitelist Jan 11 11:01:36.989: INFO: namespace e2e-tests-secret-namespace-txkd4 deletion completed in 6.260243755s • [SLOW TEST:24.397 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:01:36.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 11 11:01:47.375: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-bccf9b4f-3461-11ea-b0bd-0242ac110005,GenerateName:,Namespace:e2e-tests-events-vpwns,SelfLink:/api/v1/namespaces/e2e-tests-events-vpwns/pods/send-events-bccf9b4f-3461-11ea-b0bd-0242ac110005,UID:bcd1201d-3461-11ea-a994-fa163e34d433,ResourceVersion:17913040,Generation:0,CreationTimestamp:2020-01-11 11:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 
241587536,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ct2dq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ct2dq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-ct2dq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e080a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e080c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:01:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:01:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:01:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:01:37 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-11 11:01:37 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-11 11:01:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://84a167da04d02ecbf60317cd2e9d90e26b1daa5da43ad1a6b0b83dac3cc014a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jan 11 11:01:49.391: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 11 11:01:51.415: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:01:51.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-vpwns" for this suite. 
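The events spec above creates a pod and then looks for one event reported by the scheduler and one reported by the kubelet that reference it. A comparable manual check uses standard event field selectors (the exact selector set the framework builds may differ, and the pod name below is a placeholder since the e2e pod name is generated per run):

POD=send-events-demo   # placeholder; substitute the real pod name
kubectl get events \
  --field-selector involvedObject.kind=Pod,involvedObject.name=$POD,source=default-scheduler
kubectl get events \
  --field-selector involvedObject.kind=Pod,involvedObject.name=$POD,source=kubelet
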
Jan 11 11:02:31.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:02:31.637: INFO: namespace: e2e-tests-events-vpwns, resource: bindings, ignored listing per whitelist Jan 11 11:02:31.649: INFO: namespace e2e-tests-events-vpwns deletion completed in 40.183366391s • [SLOW TEST:54.660 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:02:31.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 11 11:02:32.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-dv85s' Jan 11 11:02:34.650: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 11 11:02:34.650: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jan 11 11:02:34.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-dv85s' Jan 11 11:02:34.934: INFO: stderr: "" Jan 11 11:02:34.934: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:02:34.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dv85s" for this suite. 
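As the captured stderr notes, --generator=job/v1 is deprecated in this release. On newer clients the closest equivalent is kubectl create job; note that it may default the restart policy to Never, so a manifest is needed if OnFailure is specifically required.

# Deprecated form used by this suite:
#   kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
#     --image=docker.io/library/nginx:1.14-alpine
# Non-deprecated equivalent on current kubectl:
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl get jobs e2e-test-nginx-job      # the "verifying the job was created" step
kubectl delete jobs e2e-test-nginx-job   # the cleanup the test performs afterwards
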
Jan 11 11:02:57.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:02:57.037: INFO: namespace: e2e-tests-kubectl-dv85s, resource: bindings, ignored listing per whitelist Jan 11 11:02:57.092: INFO: namespace e2e-tests-kubectl-dv85s deletion completed in 22.145398374s • [SLOW TEST:25.443 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:02:57.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 11 11:02:57.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-4w8vg" to be "success or failure" Jan 11 11:02:57.363: INFO: Pod "downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.296347ms Jan 11 11:02:59.377: INFO: Pod "downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053620914s Jan 11 11:03:01.404: INFO: Pod "downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080449584s Jan 11 11:03:04.236: INFO: Pod "downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.912579525s Jan 11 11:03:06.261: INFO: Pod "downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.938083006s Jan 11 11:03:08.275: INFO: Pod "downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.952204872s STEP: Saw pod success Jan 11 11:03:08.275: INFO: Pod "downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:03:08.290: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005 container client-container: STEP: delete the pod Jan 11 11:03:08.413: INFO: Waiting for pod downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005 to disappear Jan 11 11:03:08.447: INFO: Pod downwardapi-volume-ec8822e9-3461-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:03:08.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4w8vg" for this suite. Jan 11 11:03:14.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:03:14.712: INFO: namespace: e2e-tests-downward-api-4w8vg, resource: bindings, ignored listing per whitelist Jan 11 11:03:14.772: INFO: namespace e2e-tests-downward-api-4w8vg deletion completed in 6.260403225s • [SLOW TEST:17.681 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:03:14.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 11 11:03:23.243: INFO: Waiting up to 5m0s for pod "client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005" in namespace "e2e-tests-pods-rkfns" to be "success or failure" Jan 11 11:03:23.275: INFO: Pod "client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.47435ms Jan 11 11:03:25.298: INFO: Pod "client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055236092s Jan 11 11:03:27.310: INFO: Pod "client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067564177s Jan 11 11:03:29.339: INFO: Pod "client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096197244s Jan 11 11:03:31.778: INFO: Pod "client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.534607564s Jan 11 11:03:33.887: INFO: Pod "client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.64435471s STEP: Saw pod success Jan 11 11:03:33.887: INFO: Pod "client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:03:33.897: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005 container env3cont: STEP: delete the pod Jan 11 11:03:34.104: INFO: Waiting for pod client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005 to disappear Jan 11 11:03:34.131: INFO: Pod client-envvars-fbf5b335-3461-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:03:34.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rkfns" for this suite. Jan 11 11:04:16.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:04:16.362: INFO: namespace: e2e-tests-pods-rkfns, resource: bindings, ignored listing per whitelist Jan 11 11:04:16.494: INFO: namespace e2e-tests-pods-rkfns deletion completed in 42.353367404s • [SLOW TEST:61.722 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:04:16.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-7pgk STEP: Creating a pod to test atomic-volume-subpath Jan 11 11:04:16.748: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7pgk" in namespace "e2e-tests-subpath-vxztc" to be "success or failure" Jan 11 11:04:16.754: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Pending", Reason="", readiness=false. Elapsed: 5.936669ms Jan 11 11:04:18.765: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017316909s Jan 11 11:04:20.789: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040976682s Jan 11 11:04:22.864: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116165778s Jan 11 11:04:24.898: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149888093s Jan 11 11:04:26.909: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.161253556s Jan 11 11:04:28.924: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.176463286s Jan 11 11:04:30.969: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.220816531s Jan 11 11:04:32.978: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Running", Reason="", readiness=false. Elapsed: 16.230276512s Jan 11 11:04:34.991: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Running", Reason="", readiness=false. Elapsed: 18.243245957s Jan 11 11:04:37.026: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Running", Reason="", readiness=false. Elapsed: 20.278232942s Jan 11 11:04:39.042: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Running", Reason="", readiness=false. Elapsed: 22.294394308s Jan 11 11:04:41.059: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Running", Reason="", readiness=false. Elapsed: 24.311255938s Jan 11 11:04:43.072: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Running", Reason="", readiness=false. Elapsed: 26.323760509s Jan 11 11:04:45.084: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Running", Reason="", readiness=false. Elapsed: 28.336029566s Jan 11 11:04:47.209: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Running", Reason="", readiness=false. Elapsed: 30.460894359s Jan 11 11:04:49.559: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Running", Reason="", readiness=false. Elapsed: 32.810748424s Jan 11 11:04:51.569: INFO: Pod "pod-subpath-test-projected-7pgk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.821434213s STEP: Saw pod success Jan 11 11:04:51.569: INFO: Pod "pod-subpath-test-projected-7pgk" satisfied condition "success or failure" Jan 11 11:04:51.575: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-7pgk container test-container-subpath-projected-7pgk: STEP: delete the pod Jan 11 11:04:51.652: INFO: Waiting for pod pod-subpath-test-projected-7pgk to disappear Jan 11 11:04:51.659: INFO: Pod pod-subpath-test-projected-7pgk no longer exists STEP: Deleting pod pod-subpath-test-projected-7pgk Jan 11 11:04:51.659: INFO: Deleting pod "pod-subpath-test-projected-7pgk" in namespace "e2e-tests-subpath-vxztc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:04:51.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-vxztc" for this suite. 
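The pod-subpath-test-projected-7pgk pod above exercises a projected volume consumed through a subPath mount. A minimal hand-written sketch of that kind of spec follows; the ConfigMap, pod, and container names and the busybox image are illustrative assumptions, not values taken from this run.

```sh
# Sketch only: consume one key of a ConfigMap through a projected volume
# mounted with subPath, then read it back. All names and the image are assumed.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config
data:
  index.html: "hello from a projected subPath mount"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/opt/demo/index.html"]
    volumeMounts:
    - name: projected-vol
      mountPath: /opt/demo/index.html
      subPath: index.html          # mount a single file rather than the whole volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-demo-config
EOF
kubectl logs subpath-demo          # prints the ConfigMap value once the pod has completed
```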
Jan 11 11:04:59.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:04:59.918: INFO: namespace: e2e-tests-subpath-vxztc, resource: bindings, ignored listing per whitelist Jan 11 11:04:59.922: INFO: namespace e2e-tests-subpath-vxztc deletion completed in 8.196317981s • [SLOW TEST:43.428 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:04:59.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-89h89 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jan 11 11:05:00.186: INFO: Found 0 stateful pods, waiting for 3 Jan 11 11:05:10.200: INFO: Found 1 stateful pods, waiting for 3 Jan 11 11:05:20.417: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 11:05:20.417: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 11:05:20.417: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 11 11:05:30.194: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 11:05:30.194: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 11:05:30.194: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 11 11:05:30.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89h89 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 11 11:05:31.252: INFO: stderr: "I0111 11:05:30.443711 118 log.go:172] (0xc0006e82c0) (0xc00070c640) Create stream\nI0111 11:05:30.443967 118 log.go:172] (0xc0006e82c0) (0xc00070c640) Stream added, broadcasting: 1\nI0111 11:05:30.450076 118 log.go:172] (0xc0006e82c0) Reply frame received for 1\nI0111 11:05:30.450115 118 log.go:172] (0xc0006e82c0) (0xc00064cc80) Create stream\nI0111 11:05:30.450126 118 log.go:172] (0xc0006e82c0) (0xc00064cc80) Stream added, broadcasting: 3\nI0111 11:05:30.451068 118 
log.go:172] (0xc0006e82c0) Reply frame received for 3\nI0111 11:05:30.451106 118 log.go:172] (0xc0006e82c0) (0xc00070c6e0) Create stream\nI0111 11:05:30.451120 118 log.go:172] (0xc0006e82c0) (0xc00070c6e0) Stream added, broadcasting: 5\nI0111 11:05:30.451993 118 log.go:172] (0xc0006e82c0) Reply frame received for 5\nI0111 11:05:30.865999 118 log.go:172] (0xc0006e82c0) Data frame received for 3\nI0111 11:05:30.866057 118 log.go:172] (0xc00064cc80) (3) Data frame handling\nI0111 11:05:30.866084 118 log.go:172] (0xc00064cc80) (3) Data frame sent\nI0111 11:05:31.245294 118 log.go:172] (0xc0006e82c0) (0xc00064cc80) Stream removed, broadcasting: 3\nI0111 11:05:31.245417 118 log.go:172] (0xc0006e82c0) Data frame received for 1\nI0111 11:05:31.245487 118 log.go:172] (0xc00070c640) (1) Data frame handling\nI0111 11:05:31.245511 118 log.go:172] (0xc00070c640) (1) Data frame sent\nI0111 11:05:31.245537 118 log.go:172] (0xc0006e82c0) (0xc00070c6e0) Stream removed, broadcasting: 5\nI0111 11:05:31.245668 118 log.go:172] (0xc0006e82c0) (0xc00070c640) Stream removed, broadcasting: 1\nI0111 11:05:31.245717 118 log.go:172] (0xc0006e82c0) Go away received\nI0111 11:05:31.245933 118 log.go:172] (0xc0006e82c0) (0xc00070c640) Stream removed, broadcasting: 1\nI0111 11:05:31.245954 118 log.go:172] (0xc0006e82c0) (0xc00064cc80) Stream removed, broadcasting: 3\nI0111 11:05:31.245971 118 log.go:172] (0xc0006e82c0) (0xc00070c6e0) Stream removed, broadcasting: 5\n" Jan 11 11:05:31.252: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 11 11:05:31.252: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 11 11:05:31.319: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 11 11:05:41.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89h89 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 11 11:05:42.086: INFO: stderr: "I0111 11:05:41.692467 140 log.go:172] (0xc00013a790) (0xc0006bf2c0) Create stream\nI0111 11:05:41.692695 140 log.go:172] (0xc00013a790) (0xc0006bf2c0) Stream added, broadcasting: 1\nI0111 11:05:41.701567 140 log.go:172] (0xc00013a790) Reply frame received for 1\nI0111 11:05:41.701632 140 log.go:172] (0xc00013a790) (0xc00071c000) Create stream\nI0111 11:05:41.701651 140 log.go:172] (0xc00013a790) (0xc00071c000) Stream added, broadcasting: 3\nI0111 11:05:41.703241 140 log.go:172] (0xc00013a790) Reply frame received for 3\nI0111 11:05:41.703299 140 log.go:172] (0xc00013a790) (0xc00039a000) Create stream\nI0111 11:05:41.703316 140 log.go:172] (0xc00013a790) (0xc00039a000) Stream added, broadcasting: 5\nI0111 11:05:41.705140 140 log.go:172] (0xc00013a790) Reply frame received for 5\nI0111 11:05:41.855829 140 log.go:172] (0xc00013a790) Data frame received for 3\nI0111 11:05:41.855954 140 log.go:172] (0xc00071c000) (3) Data frame handling\nI0111 11:05:41.855977 140 log.go:172] (0xc00071c000) (3) Data frame sent\nI0111 11:05:42.073884 140 log.go:172] (0xc00013a790) Data frame received for 1\nI0111 11:05:42.074042 140 log.go:172] (0xc00013a790) (0xc00071c000) Stream removed, broadcasting: 3\nI0111 11:05:42.074154 140 log.go:172] (0xc0006bf2c0) (1) Data frame handling\nI0111 11:05:42.074241 140 log.go:172] (0xc0006bf2c0) (1) 
Data frame sent\nI0111 11:05:42.074301 140 log.go:172] (0xc00013a790) (0xc00039a000) Stream removed, broadcasting: 5\nI0111 11:05:42.074415 140 log.go:172] (0xc00013a790) (0xc0006bf2c0) Stream removed, broadcasting: 1\nI0111 11:05:42.074451 140 log.go:172] (0xc00013a790) Go away received\nI0111 11:05:42.075420 140 log.go:172] (0xc00013a790) (0xc0006bf2c0) Stream removed, broadcasting: 1\nI0111 11:05:42.075488 140 log.go:172] (0xc00013a790) (0xc00071c000) Stream removed, broadcasting: 3\nI0111 11:05:42.075503 140 log.go:172] (0xc00013a790) (0xc00039a000) Stream removed, broadcasting: 5\n" Jan 11 11:05:42.086: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 11 11:05:42.086: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 11 11:05:42.168: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update Jan 11 11:05:42.168: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:05:42.168: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:05:42.168: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:05:52.216: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update Jan 11 11:05:52.216: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:05:52.216: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:05:52.216: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:06:02.339: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update Jan 11 11:06:02.339: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:06:02.339: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:06:12.182: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update Jan 11 11:06:12.182: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:06:22.306: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update Jan 11 11:06:22.306: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:06:32.192: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update Jan 11 11:06:32.192: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 11 11:06:42.202: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update STEP: Rolling back to a previous revision Jan 11 11:06:52.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89h89 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 11 11:06:52.870: INFO: stderr: "I0111 11:06:52.432420 162 log.go:172] (0xc00014c6e0) (0xc000595400) Create stream\nI0111 11:06:52.432624 162 log.go:172] (0xc00014c6e0) (0xc000595400) Stream added, 
broadcasting: 1\nI0111 11:06:52.440495 162 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0111 11:06:52.440530 162 log.go:172] (0xc00014c6e0) (0xc0005954a0) Create stream\nI0111 11:06:52.440539 162 log.go:172] (0xc00014c6e0) (0xc0005954a0) Stream added, broadcasting: 3\nI0111 11:06:52.442542 162 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0111 11:06:52.442601 162 log.go:172] (0xc00014c6e0) (0xc0006f4000) Create stream\nI0111 11:06:52.442619 162 log.go:172] (0xc00014c6e0) (0xc0006f4000) Stream added, broadcasting: 5\nI0111 11:06:52.444171 162 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0111 11:06:52.718985 162 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0111 11:06:52.719028 162 log.go:172] (0xc0005954a0) (3) Data frame handling\nI0111 11:06:52.719037 162 log.go:172] (0xc0005954a0) (3) Data frame sent\nI0111 11:06:52.861923 162 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0111 11:06:52.862142 162 log.go:172] (0xc00014c6e0) (0xc0005954a0) Stream removed, broadcasting: 3\nI0111 11:06:52.862211 162 log.go:172] (0xc000595400) (1) Data frame handling\nI0111 11:06:52.862254 162 log.go:172] (0xc000595400) (1) Data frame sent\nI0111 11:06:52.862340 162 log.go:172] (0xc00014c6e0) (0xc0006f4000) Stream removed, broadcasting: 5\nI0111 11:06:52.862462 162 log.go:172] (0xc00014c6e0) (0xc000595400) Stream removed, broadcasting: 1\nI0111 11:06:52.862627 162 log.go:172] (0xc00014c6e0) Go away received\nI0111 11:06:52.863223 162 log.go:172] (0xc00014c6e0) (0xc000595400) Stream removed, broadcasting: 1\nI0111 11:06:52.863254 162 log.go:172] (0xc00014c6e0) (0xc0005954a0) Stream removed, broadcasting: 3\nI0111 11:06:52.863282 162 log.go:172] (0xc00014c6e0) (0xc0006f4000) Stream removed, broadcasting: 5\n" Jan 11 11:06:52.870: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 11 11:06:52.870: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 11 11:07:02.929: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 11 11:07:12.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89h89 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 11 11:07:13.539: INFO: stderr: "I0111 11:07:13.232378 184 log.go:172] (0xc00015c6e0) (0xc0005d5360) Create stream\nI0111 11:07:13.232549 184 log.go:172] (0xc00015c6e0) (0xc0005d5360) Stream added, broadcasting: 1\nI0111 11:07:13.239986 184 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0111 11:07:13.240063 184 log.go:172] (0xc00015c6e0) (0xc00057a000) Create stream\nI0111 11:07:13.240095 184 log.go:172] (0xc00015c6e0) (0xc00057a000) Stream added, broadcasting: 3\nI0111 11:07:13.241700 184 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0111 11:07:13.241730 184 log.go:172] (0xc00015c6e0) (0xc000530000) Create stream\nI0111 11:07:13.241739 184 log.go:172] (0xc00015c6e0) (0xc000530000) Stream added, broadcasting: 5\nI0111 11:07:13.242890 184 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0111 11:07:13.397070 184 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0111 11:07:13.397139 184 log.go:172] (0xc00057a000) (3) Data frame handling\nI0111 11:07:13.397157 184 log.go:172] (0xc00057a000) (3) Data frame sent\nI0111 11:07:13.532864 184 log.go:172] (0xc00015c6e0) (0xc00057a000) Stream removed, broadcasting: 3\nI0111 11:07:13.532996 184 log.go:172] (0xc00015c6e0) Data frame 
received for 1\nI0111 11:07:13.533012 184 log.go:172] (0xc0005d5360) (1) Data frame handling\nI0111 11:07:13.533023 184 log.go:172] (0xc0005d5360) (1) Data frame sent\nI0111 11:07:13.533036 184 log.go:172] (0xc00015c6e0) (0xc0005d5360) Stream removed, broadcasting: 1\nI0111 11:07:13.533630 184 log.go:172] (0xc00015c6e0) (0xc000530000) Stream removed, broadcasting: 5\nI0111 11:07:13.533676 184 log.go:172] (0xc00015c6e0) (0xc0005d5360) Stream removed, broadcasting: 1\nI0111 11:07:13.533701 184 log.go:172] (0xc00015c6e0) (0xc00057a000) Stream removed, broadcasting: 3\nI0111 11:07:13.533721 184 log.go:172] (0xc00015c6e0) (0xc000530000) Stream removed, broadcasting: 5\n" Jan 11 11:07:13.539: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 11 11:07:13.539: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 11 11:07:23.671: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update Jan 11 11:07:23.671: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 11 11:07:23.671: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 11 11:07:33.709: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update Jan 11 11:07:33.709: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 11 11:07:33.709: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 11 11:07:43.699: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update Jan 11 11:07:43.699: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 11 11:07:53.702: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update Jan 11 11:07:53.702: INFO: Waiting for Pod e2e-tests-statefulset-89h89/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 11 11:08:03.696: INFO: Waiting for StatefulSet e2e-tests-statefulset-89h89/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 11 11:08:13.703: INFO: Deleting all statefulset in ns e2e-tests-statefulset-89h89 Jan 11 11:08:13.717: INFO: Scaling statefulset ss2 to 0 Jan 11 11:08:43.860: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 11:08:43.866: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:08:43.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-89h89" for this suite. 
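The template update and rollback above are performed against the API by the test itself. A rough kubectl equivalent, run against the namespace shown in the log, could look like the sketch below; the container name nginx is an assumption, since the log only records the images.

```sh
# Sketch only: drive a comparable StatefulSet rolling update and rollback by hand.
# Namespace and images are from the log above; the container name is assumed.
NS=e2e-tests-statefulset-89h89
kubectl -n "$NS" set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n "$NS" rollout status statefulset/ss2      # wait for the new revision to roll out
kubectl -n "$NS" rollout history statefulset/ss2     # list the recorded revisions
kubectl -n "$NS" rollout undo statefulset/ss2        # roll back to the previous template
```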
Jan 11 11:08:52.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:08:52.125: INFO: namespace: e2e-tests-statefulset-89h89, resource: bindings, ignored listing per whitelist Jan 11 11:08:52.197: INFO: namespace e2e-tests-statefulset-89h89 deletion completed in 8.255009129s • [SLOW TEST:232.275 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:08:52.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jan 11 11:08:52.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:08:52.732: INFO: stderr: "" Jan 11 11:08:52.732: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 11:08:52.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:08:52.924: INFO: stderr: "" Jan 11 11:08:52.925: INFO: stdout: "update-demo-nautilus-9kxs2 update-demo-nautilus-tfv94 " Jan 11 11:08:52.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kxs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:08:53.184: INFO: stderr: "" Jan 11 11:08:53.184: INFO: stdout: "" Jan 11 11:08:53.184: INFO: update-demo-nautilus-9kxs2 is created but not running Jan 11 11:08:58.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:08:58.362: INFO: stderr: "" Jan 11 11:08:58.362: INFO: stdout: "update-demo-nautilus-9kxs2 update-demo-nautilus-tfv94 " Jan 11 11:08:58.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kxs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:08:58.448: INFO: stderr: "" Jan 11 11:08:58.448: INFO: stdout: "" Jan 11 11:08:58.448: INFO: update-demo-nautilus-9kxs2 is created but not running Jan 11 11:09:03.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:04.162: INFO: stderr: "" Jan 11 11:09:04.162: INFO: stdout: "update-demo-nautilus-9kxs2 update-demo-nautilus-tfv94 " Jan 11 11:09:04.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kxs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:04.458: INFO: stderr: "" Jan 11 11:09:04.458: INFO: stdout: "" Jan 11 11:09:04.458: INFO: update-demo-nautilus-9kxs2 is created but not running Jan 11 11:09:09.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:09.641: INFO: stderr: "" Jan 11 11:09:09.641: INFO: stdout: "update-demo-nautilus-9kxs2 update-demo-nautilus-tfv94 " Jan 11 11:09:09.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kxs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:09.757: INFO: stderr: "" Jan 11 11:09:09.757: INFO: stdout: "true" Jan 11 11:09:09.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kxs2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:09.924: INFO: stderr: "" Jan 11 11:09:09.924: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 11:09:09.924: INFO: validating pod update-demo-nautilus-9kxs2 Jan 11 11:09:09.984: INFO: got data: { "image": "nautilus.jpg" } Jan 11 11:09:09.984: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 11 11:09:09.984: INFO: update-demo-nautilus-9kxs2 is verified up and running Jan 11 11:09:09.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tfv94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:10.075: INFO: stderr: "" Jan 11 11:09:10.075: INFO: stdout: "true" Jan 11 11:09:10.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tfv94 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:10.186: INFO: stderr: "" Jan 11 11:09:10.186: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 11:09:10.186: INFO: validating pod update-demo-nautilus-tfv94 Jan 11 11:09:10.194: INFO: got data: { "image": "nautilus.jpg" } Jan 11 11:09:10.194: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 11:09:10.194: INFO: update-demo-nautilus-tfv94 is verified up and running STEP: scaling down the replication controller Jan 11 11:09:10.196: INFO: scanned /root for discovery docs: Jan 11 11:09:10.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:11.437: INFO: stderr: "" Jan 11 11:09:11.437: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 11:09:11.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:11.618: INFO: stderr: "" Jan 11 11:09:11.618: INFO: stdout: "update-demo-nautilus-9kxs2 update-demo-nautilus-tfv94 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 11 11:09:16.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:16.909: INFO: stderr: "" Jan 11 11:09:16.909: INFO: stdout: "update-demo-nautilus-9kxs2 update-demo-nautilus-tfv94 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 11 11:09:21.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:22.008: INFO: stderr: "" Jan 11 11:09:22.008: INFO: stdout: "update-demo-nautilus-9kxs2 update-demo-nautilus-tfv94 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 11 11:09:27.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:27.166: INFO: stderr: "" Jan 11 11:09:27.166: INFO: stdout: "update-demo-nautilus-9kxs2 " Jan 11 11:09:27.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kxs2 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:27.297: INFO: stderr: "" Jan 11 11:09:27.297: INFO: stdout: "true" Jan 11 11:09:27.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kxs2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:27.443: INFO: stderr: "" Jan 11 11:09:27.443: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 11:09:27.443: INFO: validating pod update-demo-nautilus-9kxs2 Jan 11 11:09:27.453: INFO: got data: { "image": "nautilus.jpg" } Jan 11 11:09:27.453: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 11:09:27.453: INFO: update-demo-nautilus-9kxs2 is verified up and running STEP: scaling up the replication controller Jan 11 11:09:27.455: INFO: scanned /root for discovery docs: Jan 11 11:09:27.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:29.359: INFO: stderr: "" Jan 11 11:09:29.359: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 11 11:09:29.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:29.826: INFO: stderr: "" Jan 11 11:09:29.826: INFO: stdout: "update-demo-nautilus-8tnxb update-demo-nautilus-9kxs2 " Jan 11 11:09:29.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tnxb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:30.138: INFO: stderr: "" Jan 11 11:09:30.138: INFO: stdout: "" Jan 11 11:09:30.138: INFO: update-demo-nautilus-8tnxb is created but not running Jan 11 11:09:35.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:35.282: INFO: stderr: "" Jan 11 11:09:35.282: INFO: stdout: "update-demo-nautilus-8tnxb update-demo-nautilus-9kxs2 " Jan 11 11:09:35.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tnxb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:35.403: INFO: stderr: "" Jan 11 11:09:35.403: INFO: stdout: "" Jan 11 11:09:35.403: INFO: update-demo-nautilus-8tnxb is created but not running Jan 11 11:09:40.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:40.574: INFO: stderr: "" Jan 11 11:09:40.574: INFO: stdout: "update-demo-nautilus-8tnxb update-demo-nautilus-9kxs2 " Jan 11 11:09:40.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tnxb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:40.688: INFO: stderr: "" Jan 11 11:09:40.688: INFO: stdout: "true" Jan 11 11:09:40.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8tnxb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:40.809: INFO: stderr: "" Jan 11 11:09:40.809: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 11:09:40.809: INFO: validating pod update-demo-nautilus-8tnxb Jan 11 11:09:40.825: INFO: got data: { "image": "nautilus.jpg" } Jan 11 11:09:40.825: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 11:09:40.825: INFO: update-demo-nautilus-8tnxb is verified up and running Jan 11 11:09:40.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kxs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:40.954: INFO: stderr: "" Jan 11 11:09:40.954: INFO: stdout: "true" Jan 11 11:09:40.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kxs2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:41.113: INFO: stderr: "" Jan 11 11:09:41.113: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 11 11:09:41.113: INFO: validating pod update-demo-nautilus-9kxs2 Jan 11 11:09:41.125: INFO: got data: { "image": "nautilus.jpg" } Jan 11 11:09:41.125: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 11 11:09:41.125: INFO: update-demo-nautilus-9kxs2 is verified up and running STEP: using delete to clean up resources Jan 11 11:09:41.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:41.243: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 11 11:09:41.243: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 11 11:09:41.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-qx7cm' Jan 11 11:09:41.395: INFO: stderr: "No resources found.\n" Jan 11 11:09:41.395: INFO: stdout: "" Jan 11 11:09:41.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-qx7cm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 11 11:09:41.521: INFO: stderr: "" Jan 11 11:09:41.521: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:09:41.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qx7cm" for this suite. Jan 11 11:10:05.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:10:05.692: INFO: namespace: e2e-tests-kubectl-qx7cm, resource: bindings, ignored listing per whitelist Jan 11 11:10:05.761: INFO: namespace e2e-tests-kubectl-qx7cm deletion completed in 24.230453499s • [SLOW TEST:73.563 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:10:05.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-ec06e562-3462-11ea-b0bd-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 11 11:10:05.969: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005" in namespace "e2e-tests-configmap-2rpbb" to be "success or failure" Jan 11 11:10:05.975: INFO: Pod "pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15407ms Jan 11 11:10:08.007: INFO: Pod "pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03750601s Jan 11 11:10:10.125: INFO: Pod "pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.156057353s Jan 11 11:10:12.510: INFO: Pod "pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.540858329s Jan 11 11:10:14.562: INFO: Pod "pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.59329808s Jan 11 11:10:16.592: INFO: Pod "pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.622714695s STEP: Saw pod success Jan 11 11:10:16.592: INFO: Pod "pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:10:16.599: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 11 11:10:17.808: INFO: Waiting for pod pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005 to disappear Jan 11 11:10:17.852: INFO: Pod pod-configmaps-ec07db02-3462-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:10:17.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2rpbb" for this suite. Jan 11 11:10:23.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:10:24.042: INFO: namespace: e2e-tests-configmap-2rpbb, resource: bindings, ignored listing per whitelist Jan 11 11:10:24.169: INFO: namespace e2e-tests-configmap-2rpbb deletion completed in 6.289078071s • [SLOW TEST:18.408 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:10:24.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-f70e3924-3462-11ea-b0bd-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 11 11:10:24.554: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f711831a-3462-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-cxfvl" to be "success or failure" Jan 11 11:10:24.576: INFO: Pod "pod-projected-configmaps-f711831a-3462-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.112076ms Jan 11 11:10:26.617: INFO: Pod "pod-projected-configmaps-f711831a-3462-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.062160274s Jan 11 11:10:28.640: INFO: Pod "pod-projected-configmaps-f711831a-3462-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085615695s Jan 11 11:10:30.654: INFO: Pod "pod-projected-configmaps-f711831a-3462-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099744816s Jan 11 11:10:32.729: INFO: Pod "pod-projected-configmaps-f711831a-3462-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.174151574s STEP: Saw pod success Jan 11 11:10:32.729: INFO: Pod "pod-projected-configmaps-f711831a-3462-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:10:32.745: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f711831a-3462-11ea-b0bd-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 11 11:10:33.044: INFO: Waiting for pod pod-projected-configmaps-f711831a-3462-11ea-b0bd-0242ac110005 to disappear Jan 11 11:10:33.069: INFO: Pod pod-projected-configmaps-f711831a-3462-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:10:33.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cxfvl" for this suite. Jan 11 11:10:39.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:10:39.235: INFO: namespace: e2e-tests-projected-cxfvl, resource: bindings, ignored listing per whitelist Jan 11 11:10:39.308: INFO: namespace e2e-tests-projected-cxfvl deletion completed in 6.174549818s • [SLOW TEST:15.138 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:10:39.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 11 11:10:39.438: INFO: Creating ReplicaSet my-hostname-basic-fffc51aa-3462-11ea-b0bd-0242ac110005 Jan 11 11:10:39.530: INFO: Pod name my-hostname-basic-fffc51aa-3462-11ea-b0bd-0242ac110005: Found 0 pods out of 1 Jan 11 11:10:44.551: INFO: Pod name my-hostname-basic-fffc51aa-3462-11ea-b0bd-0242ac110005: Found 1 pods out of 1 Jan 11 11:10:44.551: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-fffc51aa-3462-11ea-b0bd-0242ac110005" is running Jan 11 11:10:48.592: INFO: Pod "my-hostname-basic-fffc51aa-3462-11ea-b0bd-0242ac110005-96kbq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-01-11 11:10:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 11:10:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fffc51aa-3462-11ea-b0bd-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 11:10:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fffc51aa-3462-11ea-b0bd-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 11:10:39 +0000 UTC Reason: Message:}]) Jan 11 11:10:48.592: INFO: Trying to dial the pod Jan 11 11:10:53.730: INFO: Controller my-hostname-basic-fffc51aa-3462-11ea-b0bd-0242ac110005: Got expected result from replica 1 [my-hostname-basic-fffc51aa-3462-11ea-b0bd-0242ac110005-96kbq]: "my-hostname-basic-fffc51aa-3462-11ea-b0bd-0242ac110005-96kbq", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:10:53.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-k2lgd" for this suite. Jan 11 11:10:59.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:11:00.017: INFO: namespace: e2e-tests-replicaset-k2lgd, resource: bindings, ignored listing per whitelist Jan 11 11:11:01.103: INFO: namespace e2e-tests-replicaset-k2lgd deletion completed in 7.355100224s • [SLOW TEST:21.794 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:11:01.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 11 11:11:01.440: INFO: Waiting up to 5m0s for pod "pod-0d1476a9-3463-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-jsw2c" to be "success or failure" Jan 11 11:11:01.443: INFO: Pod "pod-0d1476a9-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.33154ms Jan 11 11:11:04.117: INFO: Pod "pod-0d1476a9-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.677290668s Jan 11 11:11:06.136: INFO: Pod "pod-0d1476a9-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.69585622s Jan 11 11:11:08.159: INFO: Pod "pod-0d1476a9-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.719070669s Jan 11 11:11:10.174: INFO: Pod "pod-0d1476a9-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.734799031s Jan 11 11:11:12.190: INFO: Pod "pod-0d1476a9-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.750145811s Jan 11 11:11:14.606: INFO: Pod "pod-0d1476a9-3463-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.166821801s STEP: Saw pod success Jan 11 11:11:14.607: INFO: Pod "pod-0d1476a9-3463-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:11:14.635: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0d1476a9-3463-11ea-b0bd-0242ac110005 container test-container: STEP: delete the pod Jan 11 11:11:15.147: INFO: Waiting for pod pod-0d1476a9-3463-11ea-b0bd-0242ac110005 to disappear Jan 11 11:11:15.293: INFO: Pod pod-0d1476a9-3463-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:11:15.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jsw2c" for this suite. Jan 11 11:11:21.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:11:21.366: INFO: namespace: e2e-tests-emptydir-jsw2c, resource: bindings, ignored listing per whitelist Jan 11 11:11:21.493: INFO: namespace e2e-tests-emptydir-jsw2c deletion completed in 6.187311779s • [SLOW TEST:20.390 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:11:21.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-rk8q8/secret-test-192e9153-3463-11ea-b0bd-0242ac110005 STEP: Creating a pod to test consume secrets Jan 11 11:11:21.753: INFO: Waiting up to 5m0s for pod "pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005" in namespace "e2e-tests-secrets-rk8q8" to be "success or failure" Jan 11 11:11:21.770: INFO: Pod "pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.899081ms Jan 11 11:11:24.000: INFO: Pod "pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246732654s Jan 11 11:11:26.022: INFO: Pod "pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.26918392s Jan 11 11:11:28.081: INFO: Pod "pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.327824755s Jan 11 11:11:30.103: INFO: Pod "pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.349497389s Jan 11 11:11:32.667: INFO: Pod "pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.914038479s STEP: Saw pod success Jan 11 11:11:32.667: INFO: Pod "pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:11:32.675: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005 container env-test: STEP: delete the pod Jan 11 11:11:32.859: INFO: Waiting for pod pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005 to disappear Jan 11 11:11:32.893: INFO: Pod pod-configmaps-192fd35f-3463-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:11:32.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rk8q8" for this suite. Jan 11 11:11:38.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:11:39.229: INFO: namespace: e2e-tests-secrets-rk8q8, resource: bindings, ignored listing per whitelist Jan 11 11:11:39.280: INFO: namespace e2e-tests-secrets-rk8q8 deletion completed in 6.364326212s • [SLOW TEST:17.787 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:11:39.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0111 11:11:42.125718 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
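The garbage collector steps above create a Deployment, delete it, and then wait for the dependent ReplicaSet and Pods to be removed rather than orphaned. A hand-run sketch of that behaviour follows; the deployment name is an assumption, and with the kubectl version used by this suite (v1.13) passing --cascade=false would orphan the dependents instead.

```sh
# Sketch only: observe cascading deletion by hand. The name is assumed;
# `kubectl create deployment` labels its objects app=<name>.
kubectl create deployment gc-demo --image=nginx
kubectl get rs,pods -l app=gc-demo     # ReplicaSet and Pods owned by the Deployment
kubectl delete deployment gc-demo      # default cascade: dependents are garbage collected
kubectl get rs,pods -l app=gc-demo     # eventually reports no resources found
```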
Jan 11 11:11:42.125: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:11:42.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-qs9sg" for this suite.
Jan 11 11:11:48.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:11:48.719: INFO: namespace: e2e-tests-gc-qs9sg, resource: bindings, ignored listing per whitelist
Jan 11 11:11:48.723: INFO: namespace e2e-tests-gc-qs9sg deletion completed in 6.590479156s
• [SLOW TEST:9.443 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:11:48.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 11:11:48.978: INFO: Waiting up to 5m0s for pod "downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-fjn2m" to be "success or failure"
Jan 11 11:11:48.992: INFO: Pod "downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 13.767211ms Jan 11 11:11:51.017: INFO: Pod "downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038783459s Jan 11 11:11:53.033: INFO: Pod "downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05447542s Jan 11 11:11:55.060: INFO: Pod "downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081091136s Jan 11 11:11:57.431: INFO: Pod "downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.452036566s Jan 11 11:11:59.749: INFO: Pod "downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.770619559s STEP: Saw pod success Jan 11 11:11:59.749: INFO: Pod "downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:11:59.766: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005 container client-container: STEP: delete the pod Jan 11 11:12:00.803: INFO: Waiting for pod downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005 to disappear Jan 11 11:12:00.899: INFO: Pod downwardapi-volume-295fee20-3463-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:12:00.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fjn2m" for this suite. Jan 11 11:12:06.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:12:07.079: INFO: namespace: e2e-tests-downward-api-fjn2m, resource: bindings, ignored listing per whitelist Jan 11 11:12:07.092: INFO: namespace e2e-tests-downward-api-fjn2m deletion completed in 6.184657573s • [SLOW TEST:18.369 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:12:07.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 11 11:12:20.644: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:12:20.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-dmsdd" for this suite.
Jan 11 11:12:47.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:12:47.305: INFO: namespace: e2e-tests-replicaset-dmsdd, resource: bindings, ignored listing per whitelist
Jan 11 11:12:47.465: INFO: namespace e2e-tests-replicaset-dmsdd deletion completed in 26.536193298s
• [SLOW TEST:40.373 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:12:47.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 11 11:12:47.709: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Jan 11 11:12:47.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ccxc5'
Jan 11 11:12:50.146: INFO: stderr: ""
Jan 11 11:12:50.146: INFO: stdout: "service/redis-slave created\n"
Jan 11 11:12:50.146: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Jan 11 11:12:50.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ccxc5'
Jan 11 11:12:50.546: INFO: stderr: ""
Jan 11 11:12:50.546: INFO: stdout: "service/redis-master created\n"
Jan 11 11:12:50.546: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 11 11:12:50.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ccxc5'
Jan 11 11:12:51.083: INFO: stderr: ""
Jan 11 11:12:51.083: INFO: stdout: "service/frontend created\n"
Jan 11 11:12:51.083: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Jan 11 11:12:51.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ccxc5'
Jan 11 11:12:51.517: INFO: stderr: ""
Jan 11 11:12:51.517: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 11 11:12:51.518: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 11 11:12:51.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ccxc5'
Jan 11 11:12:51.928: INFO: stderr: ""
Jan 11 11:12:51.928: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 11 11:12:51.929: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Jan 11 11:12:51.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ccxc5'
Jan 11 11:12:53.187: INFO: stderr: ""
Jan 11 11:12:53.187: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 11 11:12:53.187: INFO: Waiting for all frontend pods to be Running.
Jan 11 11:13:23.239: INFO: Waiting for frontend to serve content.
Jan 11 11:13:25.157: INFO: Trying to add a new entry to the guestbook.
Jan 11 11:13:25.190: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 11 11:13:25.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ccxc5'
Jan 11 11:13:25.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Jan 11 11:13:25.440: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jan 11 11:13:25.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ccxc5' Jan 11 11:13:25.765: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 11:13:25.766: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 11 11:13:25.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ccxc5' Jan 11 11:13:25.951: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 11:13:25.951: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 11 11:13:25.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ccxc5' Jan 11 11:13:26.071: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 11:13:26.071: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 11 11:13:26.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ccxc5' Jan 11 11:13:26.257: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 11:13:26.257: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 11 11:13:26.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ccxc5' Jan 11 11:13:26.537: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 11:13:26.537: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:13:26.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ccxc5" for this suite. 
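Note that the guestbook manifests logged above use the long-deprecated extensions/v1beta1 Deployment API; that API group was removed in Kubernetes 1.16, and the apps/v1 replacement also requires an explicit spec.selector. A rough apps/v1 rewrite of the frontend Deployment, keeping only the fields shown in the log and adding the now-mandatory selector, would look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:                      # required in apps/v1; must match the template labels
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80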
Jan 11 11:14:14.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:14:14.741: INFO: namespace: e2e-tests-kubectl-ccxc5, resource: bindings, ignored listing per whitelist Jan 11 11:14:14.894: INFO: namespace e2e-tests-kubectl-ccxc5 deletion completed in 48.28004289s • [SLOW TEST:87.428 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:14:14.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 11 11:14:31.397: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 11:14:31.426: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 11:14:33.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 11:14:33.615: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 11:14:35.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 11:14:35.484: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 11:14:37.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 11:14:37.445: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 11:14:39.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 11:14:39.442: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 11:14:41.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 11:14:41.444: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 11:14:43.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 11:14:43.441: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:14:43.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-wxbhp" for this suite. 
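The lifecycle-hook test above wires a postStart httpGet hook that calls back into a separate handler pod. A minimal sketch of such a pod spec follows; the image, path, port, and host are illustrative placeholders, not values taken from this run:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # pod name as it appears in the log
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1        # assumption; the e2e suite uses its own test image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # illustrative path on the handler
          port: 8080                   # illustrative handler port
          host: 10.32.0.5              # illustrative; the real test targets the handler pod's IP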
Jan 11 11:15:07.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:15:07.635: INFO: namespace: e2e-tests-container-lifecycle-hook-wxbhp, resource: bindings, ignored listing per whitelist Jan 11 11:15:07.683: INFO: namespace e2e-tests-container-lifecycle-hook-wxbhp deletion completed in 24.233269546s • [SLOW TEST:52.789 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:15:07.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9ffec9a5-3463-11ea-b0bd-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 11 11:15:07.902: INFO: Waiting up to 5m0s for pod "pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005" in namespace "e2e-tests-configmap-z5qvm" to be "success or failure" Jan 11 11:15:07.946: INFO: Pod "pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.239072ms Jan 11 11:15:09.992: INFO: Pod "pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089189446s Jan 11 11:15:12.010: INFO: Pod "pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107455213s Jan 11 11:15:14.053: INFO: Pod "pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150596894s Jan 11 11:15:16.119: INFO: Pod "pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.216567471s Jan 11 11:15:18.140: INFO: Pod "pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.237161764s STEP: Saw pod success Jan 11 11:15:18.140: INFO: Pod "pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:15:18.145: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 11 11:15:18.713: INFO: Waiting for pod pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005 to disappear Jan 11 11:15:18.723: INFO: Pod pod-configmaps-9fff7099-3463-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:15:18.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-z5qvm" for this suite. Jan 11 11:15:24.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:15:24.837: INFO: namespace: e2e-tests-configmap-z5qvm, resource: bindings, ignored listing per whitelist Jan 11 11:15:25.089: INFO: namespace e2e-tests-configmap-z5qvm deletion completed in 6.35729043s • [SLOW TEST:17.406 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:15:25.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Jan 11 11:15:25.973: INFO: Waiting up to 5m0s for pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv" in namespace "e2e-tests-svcaccounts-r6hl4" to be "success or failure" Jan 11 11:15:25.990: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv": Phase="Pending", Reason="", readiness=false. Elapsed: 17.084146ms Jan 11 11:15:28.004: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030455891s Jan 11 11:15:30.016: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043257076s Jan 11 11:15:32.045: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071970546s Jan 11 11:15:34.882: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.908417275s Jan 11 11:15:37.217: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv": Phase="Pending", Reason="", readiness=false. Elapsed: 11.243702481s Jan 11 11:15:39.228: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.25495759s Jan 11 11:15:41.243: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv": Phase="Pending", Reason="", readiness=false. Elapsed: 15.270012718s Jan 11 11:15:43.263: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.290314812s STEP: Saw pod success Jan 11 11:15:43.263: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv" satisfied condition "success or failure" Jan 11 11:15:43.269: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv container token-test: STEP: delete the pod Jan 11 11:15:44.063: INFO: Waiting for pod pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv to disappear Jan 11 11:15:44.400: INFO: Pod pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-6jvjv no longer exists STEP: Creating a pod to test consume service account root CA Jan 11 11:15:44.415: INFO: Waiting up to 5m0s for pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4" in namespace "e2e-tests-svcaccounts-r6hl4" to be "success or failure" Jan 11 11:15:44.477: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4": Phase="Pending", Reason="", readiness=false. Elapsed: 62.074426ms Jan 11 11:15:46.802: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387733312s Jan 11 11:15:48.828: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.413570058s Jan 11 11:15:50.842: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427372921s Jan 11 11:15:53.127: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.712359387s Jan 11 11:15:56.078: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.663120074s Jan 11 11:15:58.102: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.687401816s Jan 11 11:16:00.114: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.699179699s STEP: Saw pod success Jan 11 11:16:00.114: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4" satisfied condition "success or failure" Jan 11 11:16:00.133: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4 container root-ca-test: STEP: delete the pod Jan 11 11:16:00.837: INFO: Waiting for pod pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4 to disappear Jan 11 11:16:00.846: INFO: Pod pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-ndlk4 no longer exists STEP: Creating a pod to test consume service account namespace Jan 11 11:16:00.874: INFO: Waiting up to 5m0s for pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v" in namespace "e2e-tests-svcaccounts-r6hl4" to be "success or failure" Jan 11 11:16:00.890: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 15.90818ms Jan 11 11:16:02.906: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032205868s Jan 11 11:16:04.918: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044697591s Jan 11 11:16:07.401: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526816228s Jan 11 11:16:09.418: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543875856s Jan 11 11:16:11.583: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.709077338s Jan 11 11:16:14.303: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 13.4292276s Jan 11 11:16:16.319: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v": Phase="Pending", Reason="", readiness=false. Elapsed: 15.445033765s Jan 11 11:16:18.335: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.461711499s STEP: Saw pod success Jan 11 11:16:18.336: INFO: Pod "pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v" satisfied condition "success or failure" Jan 11 11:16:18.343: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v container namespace-test: STEP: delete the pod Jan 11 11:16:18.443: INFO: Waiting for pod pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v to disappear Jan 11 11:16:18.515: INFO: Pod pod-service-account-aac372b4-3463-11ea-b0bd-0242ac110005-x7j6v no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:16:18.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-r6hl4" for this suite. 
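The three pods in the ServiceAccounts test above each read one file from the token volume that the kubelet auto-mounts at /var/run/secrets/kubernetes.io/serviceaccount. A rough equivalent of the first pod, with an illustrative image and name, looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-token   # illustrative; the suite generates unique names
spec:
  restartPolicy: Never
  serviceAccountName: default
  containers:
  - name: token-test                # container name from the log; siblings are root-ca-test and namespace-test
    image: busybox                  # assumption; the suite uses its own mount-test image
    command: ["sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/token"]
    # root-ca-test reads .../ca.crt and namespace-test reads .../namespace from the same mount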
Jan 11 11:16:25.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:16:25.671: INFO: namespace: e2e-tests-svcaccounts-r6hl4, resource: bindings, ignored listing per whitelist Jan 11 11:16:25.695: INFO: namespace e2e-tests-svcaccounts-r6hl4 deletion completed in 7.168829388s • [SLOW TEST:60.605 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:16:25.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jan 11 11:16:25.963: INFO: Waiting up to 5m0s for pod "client-containers-ce8156be-3463-11ea-b0bd-0242ac110005" in namespace "e2e-tests-containers-6lrq4" to be "success or failure" Jan 11 11:16:26.011: INFO: Pod "client-containers-ce8156be-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.236875ms Jan 11 11:16:28.022: INFO: Pod "client-containers-ce8156be-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05918695s Jan 11 11:16:30.041: INFO: Pod "client-containers-ce8156be-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0779248s Jan 11 11:16:32.054: INFO: Pod "client-containers-ce8156be-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091615411s Jan 11 11:16:34.064: INFO: Pod "client-containers-ce8156be-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101596234s Jan 11 11:16:36.072: INFO: Pod "client-containers-ce8156be-3463-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10975287s STEP: Saw pod success Jan 11 11:16:36.072: INFO: Pod "client-containers-ce8156be-3463-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:16:36.076: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ce8156be-3463-11ea-b0bd-0242ac110005 container test-container: STEP: delete the pod Jan 11 11:16:36.615: INFO: Waiting for pod client-containers-ce8156be-3463-11ea-b0bd-0242ac110005 to disappear Jan 11 11:16:36.644: INFO: Pod client-containers-ce8156be-3463-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:16:36.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-6lrq4" for this suite. 
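The Docker Containers test above overrides only the image's default arguments; the relevant behavior is that a container's args field replaces the image CMD while the image ENTRYPOINT is left untouched. A hedged sketch, with an assumed test image and illustrative argument values:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override-args   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container                  # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0   # assumption about the exact image
    args: ["override", "arguments"]       # replaces the image CMD; ENTRYPOINT is kept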
Jan 11 11:16:42.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:16:42.944: INFO: namespace: e2e-tests-containers-6lrq4, resource: bindings, ignored listing per whitelist Jan 11 11:16:42.983: INFO: namespace e2e-tests-containers-6lrq4 deletion completed in 6.330307191s • [SLOW TEST:17.288 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:16:42.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 11 11:16:43.291: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-7kmz6" to be "success or failure" Jan 11 11:16:43.425: INFO: Pod "downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 133.428976ms Jan 11 11:16:46.116: INFO: Pod "downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.824825305s Jan 11 11:16:48.137: INFO: Pod "downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.846182414s Jan 11 11:16:50.357: INFO: Pod "downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.065641803s Jan 11 11:16:52.367: INFO: Pod "downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.075449691s Jan 11 11:16:54.380: INFO: Pod "downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.088423923s STEP: Saw pod success Jan 11 11:16:54.380: INFO: Pod "downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:16:54.384: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005 container client-container: STEP: delete the pod Jan 11 11:16:54.509: INFO: Waiting for pod downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005 to disappear Jan 11 11:16:54.665: INFO: Pod downwardapi-volume-d8d65856-3463-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:16:54.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7kmz6" for this suite. Jan 11 11:17:02.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:17:02.289: INFO: namespace: e2e-tests-downward-api-7kmz6, resource: bindings, ignored listing per whitelist Jan 11 11:17:02.367: INFO: namespace e2e-tests-downward-api-7kmz6 deletion completed in 7.687000928s • [SLOW TEST:19.384 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:17:02.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e4702d75-3463-11ea-b0bd-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 11 11:17:02.740: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-r6525" to be "success or failure" Jan 11 11:17:02.756: INFO: Pod "pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.646805ms Jan 11 11:17:04.775: INFO: Pod "pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035367039s Jan 11 11:17:06.790: INFO: Pod "pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050584039s Jan 11 11:17:08.864: INFO: Pod "pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124287136s Jan 11 11:17:10.885: INFO: Pod "pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.144946748s Jan 11 11:17:12.927: INFO: Pod "pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.187116095s STEP: Saw pod success Jan 11 11:17:12.927: INFO: Pod "pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:17:12.944: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 11 11:17:13.092: INFO: Waiting for pod pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005 to disappear Jan 11 11:17:13.099: INFO: Pod pod-projected-configmaps-e471101e-3463-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:17:13.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r6525" for this suite. Jan 11 11:17:19.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:17:19.211: INFO: namespace: e2e-tests-projected-r6525, resource: bindings, ignored listing per whitelist Jan 11 11:17:19.333: INFO: namespace e2e-tests-projected-r6525 deletion completed in 6.227316754s • [SLOW TEST:16.965 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:17:19.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 11 11:17:19.645: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-wc7wl" to be "success or failure" Jan 11 11:17:19.651: INFO: Pod "downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405367ms Jan 11 11:17:21.685: INFO: Pod "downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040355198s Jan 11 11:17:23.698: INFO: Pod "downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.053460129s Jan 11 11:17:25.715: INFO: Pod "downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06979854s Jan 11 11:17:27.730: INFO: Pod "downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085234366s Jan 11 11:17:29.736: INFO: Pod "downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091394442s STEP: Saw pod success Jan 11 11:17:29.736: INFO: Pod "downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:17:29.742: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005 container client-container: STEP: delete the pod Jan 11 11:17:30.325: INFO: Waiting for pod downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005 to disappear Jan 11 11:17:30.340: INFO: Pod downwardapi-volume-ee83a8f3-3463-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:17:30.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wc7wl" for this suite. Jan 11 11:17:36.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:17:36.750: INFO: namespace: e2e-tests-projected-wc7wl, resource: bindings, ignored listing per whitelist Jan 11 11:17:36.843: INFO: namespace e2e-tests-projected-wc7wl deletion completed in 6.463741987s • [SLOW TEST:17.510 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:17:36.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 11 11:17:37.094: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-58mll" to be "success or failure" Jan 11 11:17:37.110: INFO: Pod "downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.891223ms Jan 11 11:17:39.121: INFO: Pod "downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026331331s Jan 11 11:17:41.137: INFO: Pod "downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042534737s Jan 11 11:17:43.396: INFO: Pod "downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30114425s Jan 11 11:17:45.438: INFO: Pod "downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.344060219s Jan 11 11:17:47.457: INFO: Pod "downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.362374824s STEP: Saw pod success Jan 11 11:17:47.457: INFO: Pod "downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:17:47.463: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005 container client-container: STEP: delete the pod Jan 11 11:17:47.534: INFO: Waiting for pod downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005 to disappear Jan 11 11:17:47.546: INFO: Pod downwardapi-volume-f8eb4af6-3463-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:17:47.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-58mll" for this suite. Jan 11 11:17:53.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:17:53.969: INFO: namespace: e2e-tests-downward-api-58mll, resource: bindings, ignored listing per whitelist Jan 11 11:17:54.052: INFO: namespace e2e-tests-downward-api-58mll deletion completed in 6.438507182s • [SLOW TEST:17.209 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:17:54.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 11 11:17:54.331: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 
11:18:04.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-5kqbh" for this suite. Jan 11 11:18:54.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:18:54.674: INFO: namespace: e2e-tests-pods-5kqbh, resource: bindings, ignored listing per whitelist Jan 11 11:18:54.691: INFO: namespace e2e-tests-pods-5kqbh deletion completed in 50.263072024s • [SLOW TEST:60.639 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:18:54.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jan 11 11:18:54.906: INFO: Waiting up to 5m0s for pod "client-containers-27438066-3464-11ea-b0bd-0242ac110005" in namespace "e2e-tests-containers-n54pd" to be "success or failure" Jan 11 11:18:54.921: INFO: Pod "client-containers-27438066-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.526003ms Jan 11 11:18:57.210: INFO: Pod "client-containers-27438066-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303345801s Jan 11 11:18:59.236: INFO: Pod "client-containers-27438066-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32945409s Jan 11 11:19:01.464: INFO: Pod "client-containers-27438066-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.557938016s Jan 11 11:19:03.990: INFO: Pod "client-containers-27438066-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.083817579s Jan 11 11:19:06.013: INFO: Pod "client-containers-27438066-3464-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.106878191s STEP: Saw pod success Jan 11 11:19:06.013: INFO: Pod "client-containers-27438066-3464-11ea-b0bd-0242ac110005" satisfied condition "success or failure" Jan 11 11:19:06.022: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-27438066-3464-11ea-b0bd-0242ac110005 container test-container: STEP: delete the pod Jan 11 11:19:06.241: INFO: Waiting for pod client-containers-27438066-3464-11ea-b0bd-0242ac110005 to disappear Jan 11 11:19:06.259: INFO: Pod client-containers-27438066-3464-11ea-b0bd-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:19:06.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-n54pd" for this suite. Jan 11 11:19:12.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 11 11:19:12.640: INFO: namespace: e2e-tests-containers-n54pd, resource: bindings, ignored listing per whitelist Jan 11 11:19:12.717: INFO: namespace e2e-tests-containers-n54pd deletion completed in 6.435209698s • [SLOW TEST:18.026 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 11 11:19:12.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 11 11:19:12.843: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 11 11:19:12.855: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 11 11:19:18.786: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 11 11:19:22.815: INFO: Creating deployment "test-rolling-update-deployment" Jan 11 11:19:22.839: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 11 11:19:22.916: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 11 11:19:24.957: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 11 11:19:25.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338362, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 11:19:27.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338362, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 11:19:29.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338362, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 11:19:31.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338363, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714338362, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 11:19:33.669: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 11 11:19:33.690: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-rwm9c,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rwm9c/deployments/test-rolling-update-deployment,UID:37f2c749-3464-11ea-a994-fa163e34d433,ResourceVersion:17915645,Generation:1,CreationTimestamp:2020-01-11 11:19:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-11 11:19:23 +0000 UTC 2020-01-11 11:19:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-11 11:19:33 +0000 UTC 2020-01-11 11:19:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 11 11:19:33.694: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-rwm9c,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rwm9c/replicasets/test-rolling-update-deployment-75db98fb4c,UID:3808cc8b-3464-11ea-a994-fa163e34d433,ResourceVersion:17915635,Generation:1,CreationTimestamp:2020-01-11 11:19:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 37f2c749-3464-11ea-a994-fa163e34d433 0xc0020d2557 0xc0020d2558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 11 11:19:33.694: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 11 11:19:33.694: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-rwm9c,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rwm9c/replicasets/test-rolling-update-controller,UID:3200f259-3464-11ea-a994-fa163e34d433,ResourceVersion:17915644,Generation:2,CreationTimestamp:2020-01-11 11:19:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 37f2c749-3464-11ea-a994-fa163e34d433 0xc0020d2497 0xc0020d2498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 11 11:19:33.700: INFO: Pod "test-rolling-update-deployment-75db98fb4c-pkhkp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-pkhkp,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-rwm9c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rwm9c/pods/test-rolling-update-deployment-75db98fb4c-pkhkp,UID:38181f55-3464-11ea-a994-fa163e34d433,ResourceVersion:17915634,Generation:0,CreationTimestamp:2020-01-11 11:19:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 3808cc8b-3464-11ea-a994-fa163e34d433 0xc0020d2e57 0xc0020d2e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hrlvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hrlvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hrlvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020d2ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020d2ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:19:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:19:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:19:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:19:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-11 11:19:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-11 11:19:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://7d6991363f3aaf95e66cfa0668fe2ed07c4684b656eaa05515ca1ac669725842}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 11 11:19:33.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-rwm9c" for this suite. 
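Before the deployment namespace above finishes tearing down, a note on what this spec exercises: it adopts an existing ReplicaSet and then rolls it over by creating "test-rolling-update-deployment" with the RollingUpdate strategy (MaxUnavailable/MaxSurge of 25%, visible in the dump). The sketch below shows the shape of such a Deployment with client-go; it is not the e2e framework's own helper, it assumes a current client-go (newer than the v1.13 client used in this run, whose methods take no context argument), and the namespace "default" and name "rolling-update-demo" are purely illustrative.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the e2e run above uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	labels := map[string]string{"name": "sample-pod"}

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "rolling-update-demo"}, // illustrative name
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Explicit RollingUpdate strategy; 25%/25% are also the API defaults.
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}

	// On a template change the Deployment controller brings up a new ReplicaSet and
	// scales the old one to 0, which is the end state of "test-rolling-update-controller" above.
	if _, err := client.AppsV1().Deployments("default").Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}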
Jan 11 11:19:42.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:19:42.501: INFO: namespace: e2e-tests-deployment-rwm9c, resource: bindings, ignored listing per whitelist
Jan 11 11:19:42.634: INFO: namespace e2e-tests-deployment-rwm9c deletion completed in 8.929226878s

• [SLOW TEST:29.916 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:19:42.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0111 11:20:14.143493       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 11 11:20:14.143: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:20:14.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-j5rnr" for this suite.
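The key detail in the garbage-collector spec above is the delete call's propagation policy: with Orphan, the owning Deployment goes away but its ReplicaSet must be left behind, which is what the 30-second wait confirms. A minimal sketch of such a delete with client-go follows (current client-go signatures are assumed, newer than this run's v1.13 client; the namespace "default" and the deployment name are hypothetical, the log does not show the real one).

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Orphan propagation: the Deployment object is removed, but the garbage
	// collector must NOT cascade the delete to the ReplicaSet it owns.
	orphan := metav1.DeletePropagationOrphan
	err = client.AppsV1().Deployments("default").Delete(
		context.TODO(),
		"gc-test-deployment", // hypothetical name
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		panic(err)
	}
}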
Jan 11 11:20:22.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:20:22.641: INFO: namespace: e2e-tests-gc-j5rnr, resource: bindings, ignored listing per whitelist
Jan 11 11:20:22.701: INFO: namespace e2e-tests-gc-j5rnr deletion completed in 8.553212787s

• [SLOW TEST:40.067 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:20:22.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 11 11:20:22.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-lzlpq'
Jan 11 11:20:23.101: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 11 11:20:23.101: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 11 11:20:25.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lzlpq'
Jan 11 11:20:26.126: INFO: stderr: ""
Jan 11 11:20:26.126: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:20:26.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lzlpq" for this suite.
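The kubectl spec above shells out to `kubectl run` (with the now-deprecated deployment generator, per its stderr) and then only has to confirm that a pod controlled by e2e-test-nginx-deployment appears. The sketch below is a rough client-go equivalent of that verification step, not the framework's own code; it assumes current client-go signatures, the namespace "default", and that `kubectl run` labels its pods with run=<name>, which is an assumption not visible in this log.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll until at least one pod carrying the deployment's run=<name> label exists.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "run=e2e-test-nginx-deployment",
		})
		if err != nil {
			return false, err
		}
		return len(pods.Items) > 0, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod controlled by e2e-test-nginx-deployment exists")
}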
Jan 11 11:20:32.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:20:32.345: INFO: namespace: e2e-tests-kubectl-lzlpq, resource: bindings, ignored listing per whitelist
Jan 11 11:20:32.430: INFO: namespace e2e-tests-kubectl-lzlpq deletion completed in 6.28143667s

• [SLOW TEST:9.729 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:20:32.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 11:20:32.789: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 28.068542ms)
Jan 11 11:20:32.902: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 113.454682ms)
Jan 11 11:20:32.920: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.164444ms)
Jan 11 11:20:32.931: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.670004ms)
Jan 11 11:20:32.935: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.971355ms)
Jan 11 11:20:32.939: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.789184ms)
Jan 11 11:20:32.949: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.898886ms)
Jan 11 11:20:33.045: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 95.850511ms)
Jan 11 11:20:33.053: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.734054ms)
Jan 11 11:20:33.059: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.030873ms)
Jan 11 11:20:33.066: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.793526ms)
Jan 11 11:20:33.073: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.189627ms)
Jan 11 11:20:33.078: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.934337ms)
Jan 11 11:20:33.085: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.752323ms)
Jan 11 11:20:33.091: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.390396ms)
Jan 11 11:20:33.097: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.338715ms)
Jan 11 11:20:33.105: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.232223ms)
Jan 11 11:20:33.110: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.489699ms)
Jan 11 11:20:33.115: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.94082ms)
Jan 11 11:20:33.120: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.783651ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:20:33.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-djdsz" for this suite.
Jan 11 11:20:39.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:20:39.494: INFO: namespace: e2e-tests-proxy-djdsz, resource: bindings, ignored listing per whitelist
Jan 11 11:20:39.585: INFO: namespace e2e-tests-proxy-djdsz deletion completed in 6.457027722s

• [SLOW TEST:7.155 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
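Each of the 20 numbered requests above is a GET against the node's proxy/logs subresource through the API server, and the truncated body is the node's log directory listing (typically /var/log, hence alternatives.log). A minimal way to issue the same request with client-go is sketched below; it assumes current client-go signatures (the run above uses the older v1.13 client, whose DoRaw takes no context), and the node name is taken from the log.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/nodes/<node>/proxy/logs/ through the API server; the API server
	// proxies the request to the kubelet, which serves its log directory.
	body, err := client.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}

The per-request durations logged above (for example 28.068542ms for request 0) are simply the round-trip times of these proxied GETs.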
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:20:39.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 11:20:39.807: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-h29bj" to be "success or failure"
Jan 11 11:20:39.971: INFO: Pod "downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 163.851699ms
Jan 11 11:20:41.998: INFO: Pod "downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191471641s
Jan 11 11:20:44.025: INFO: Pod "downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218310747s
Jan 11 11:20:46.119: INFO: Pod "downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312135286s
Jan 11 11:20:48.262: INFO: Pod "downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.455320368s
Jan 11 11:20:50.465: INFO: Pod "downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.657860075s
STEP: Saw pod success
Jan 11 11:20:50.465: INFO: Pod "downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:20:50.479: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 11:20:50.640: INFO: Waiting for pod downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:20:50.666: INFO: Pod downwardapi-volume-65d33e0c-3464-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:20:50.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h29bj" for this suite.
Jan 11 11:20:56.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:20:56.912: INFO: namespace: e2e-tests-projected-h29bj, resource: bindings, ignored listing per whitelist
Jan 11 11:20:56.962: INFO: namespace e2e-tests-projected-h29bj deletion completed in 6.203532664s

• [SLOW TEST:17.376 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
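The projected downward API spec above only shows the pod polling and the success condition; the interesting part is the volume definition, where DefaultMode controls the permission bits of every file the projected volume writes. A sketch of such a pod with client-go follows. Assumptions: current client-go signatures, namespace "default", a busybox image, and mode 0400; the exact mode value the conformance test asserts is not visible in this log.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	defaultMode := int32(0400) // assumed mode; applied to every projected file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-defaultmode-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/podinfo"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// Expose the pod's own name as a file in the volume.
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}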
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:20:56.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-nj6ct
I0111 11:20:57.404119       9 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-nj6ct, replica count: 1
I0111 11:20:58.454806       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 11:20:59.455156       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 11:21:00.455426       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 11:21:01.455766       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 11:21:02.456179       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 11:21:03.456490       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 11:21:04.456833       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 11:21:05.457296       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 11:21:06.457641       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 11 11:21:07.097: INFO: Created: latency-svc-g78p8
Jan 11 11:21:07.361: INFO: Got endpoints: latency-svc-g78p8 [803.39163ms]
Jan 11 11:21:07.520: INFO: Created: latency-svc-rfctn
Jan 11 11:21:07.533: INFO: Got endpoints: latency-svc-rfctn [171.388475ms]
Jan 11 11:21:07.723: INFO: Created: latency-svc-66hkh
Jan 11 11:21:07.753: INFO: Got endpoints: latency-svc-66hkh [391.689274ms]
Jan 11 11:21:07.777: INFO: Created: latency-svc-rrpv9
Jan 11 11:21:07.888: INFO: Got endpoints: latency-svc-rrpv9 [526.327301ms]
Jan 11 11:21:07.904: INFO: Created: latency-svc-gtrld
Jan 11 11:21:07.927: INFO: Got endpoints: latency-svc-gtrld [565.210092ms]
Jan 11 11:21:08.066: INFO: Created: latency-svc-nl69k
Jan 11 11:21:08.071: INFO: Got endpoints: latency-svc-nl69k [709.166115ms]
Jan 11 11:21:08.130: INFO: Created: latency-svc-tqtz4
Jan 11 11:21:08.232: INFO: Got endpoints: latency-svc-tqtz4 [870.766695ms]
Jan 11 11:21:08.288: INFO: Created: latency-svc-kl4wx
Jan 11 11:21:08.307: INFO: Got endpoints: latency-svc-kl4wx [945.478076ms]
Jan 11 11:21:08.539: INFO: Created: latency-svc-lt9d5
Jan 11 11:21:08.589: INFO: Got endpoints: latency-svc-lt9d5 [1.227282132s]
Jan 11 11:21:08.799: INFO: Created: latency-svc-bqp2d
Jan 11 11:21:08.825: INFO: Got endpoints: latency-svc-bqp2d [1.463650043s]
Jan 11 11:21:08.979: INFO: Created: latency-svc-pjxrv
Jan 11 11:21:09.053: INFO: Got endpoints: latency-svc-pjxrv [1.691105569s]
Jan 11 11:21:09.073: INFO: Created: latency-svc-m6jlg
Jan 11 11:21:09.188: INFO: Got endpoints: latency-svc-m6jlg [1.826127429s]
Jan 11 11:21:09.211: INFO: Created: latency-svc-s26sc
Jan 11 11:21:09.422: INFO: Got endpoints: latency-svc-s26sc [2.060073508s]
Jan 11 11:21:09.441: INFO: Created: latency-svc-86llc
Jan 11 11:21:09.463: INFO: Got endpoints: latency-svc-86llc [2.101145598s]
Jan 11 11:21:09.515: INFO: Created: latency-svc-66flg
Jan 11 11:21:09.645: INFO: Got endpoints: latency-svc-66flg [2.28342473s]
Jan 11 11:21:09.699: INFO: Created: latency-svc-4vp6b
Jan 11 11:21:09.955: INFO: Got endpoints: latency-svc-4vp6b [2.593036629s]
Jan 11 11:21:10.000: INFO: Created: latency-svc-c257t
Jan 11 11:21:10.036: INFO: Got endpoints: latency-svc-c257t [2.503123597s]
Jan 11 11:21:10.193: INFO: Created: latency-svc-7ttbw
Jan 11 11:21:10.221: INFO: Got endpoints: latency-svc-7ttbw [266.016911ms]
Jan 11 11:21:10.453: INFO: Created: latency-svc-62whl
Jan 11 11:21:10.501: INFO: Got endpoints: latency-svc-62whl [2.748148031s]
Jan 11 11:21:10.689: INFO: Created: latency-svc-6r2kl
Jan 11 11:21:10.916: INFO: Created: latency-svc-bnfjj
Jan 11 11:21:10.924: INFO: Got endpoints: latency-svc-6r2kl [3.036368705s]
Jan 11 11:21:10.936: INFO: Got endpoints: latency-svc-bnfjj [3.008727799s]
Jan 11 11:21:11.166: INFO: Created: latency-svc-hhfg2
Jan 11 11:21:11.169: INFO: Got endpoints: latency-svc-hhfg2 [3.09833765s]
Jan 11 11:21:11.225: INFO: Created: latency-svc-dcltx
Jan 11 11:21:11.255: INFO: Got endpoints: latency-svc-dcltx [3.022223404s]
Jan 11 11:21:11.536: INFO: Created: latency-svc-zqkhv
Jan 11 11:21:11.573: INFO: Got endpoints: latency-svc-zqkhv [3.26623452s]
Jan 11 11:21:11.775: INFO: Created: latency-svc-r7kxk
Jan 11 11:21:11.802: INFO: Got endpoints: latency-svc-r7kxk [3.213368104s]
Jan 11 11:21:12.004: INFO: Created: latency-svc-sbt76
Jan 11 11:21:12.070: INFO: Got endpoints: latency-svc-sbt76 [3.244787234s]
Jan 11 11:21:12.243: INFO: Created: latency-svc-dg9f2
Jan 11 11:21:12.410: INFO: Created: latency-svc-wl7hr
Jan 11 11:21:12.410: INFO: Got endpoints: latency-svc-dg9f2 [3.3569355s]
Jan 11 11:21:12.426: INFO: Got endpoints: latency-svc-wl7hr [3.237849149s]
Jan 11 11:21:12.479: INFO: Created: latency-svc-rsjkh
Jan 11 11:21:12.610: INFO: Got endpoints: latency-svc-rsjkh [3.188177765s]
Jan 11 11:21:12.619: INFO: Created: latency-svc-plhms
Jan 11 11:21:12.645: INFO: Got endpoints: latency-svc-plhms [3.182320623s]
Jan 11 11:21:12.682: INFO: Created: latency-svc-6nfw8
Jan 11 11:21:12.693: INFO: Got endpoints: latency-svc-6nfw8 [3.047299746s]
Jan 11 11:21:12.806: INFO: Created: latency-svc-cv4gf
Jan 11 11:21:12.844: INFO: Got endpoints: latency-svc-cv4gf [2.807898779s]
Jan 11 11:21:12.997: INFO: Created: latency-svc-q7f8z
Jan 11 11:21:13.065: INFO: Created: latency-svc-dkz2z
Jan 11 11:21:13.068: INFO: Got endpoints: latency-svc-q7f8z [2.846947164s]
Jan 11 11:21:13.161: INFO: Got endpoints: latency-svc-dkz2z [2.659838457s]
Jan 11 11:21:13.183: INFO: Created: latency-svc-pg6bz
Jan 11 11:21:13.198: INFO: Got endpoints: latency-svc-pg6bz [2.273717199s]
Jan 11 11:21:13.250: INFO: Created: latency-svc-9prkd
Jan 11 11:21:13.268: INFO: Got endpoints: latency-svc-9prkd [2.332594942s]
Jan 11 11:21:13.408: INFO: Created: latency-svc-5vjwv
Jan 11 11:21:13.481: INFO: Got endpoints: latency-svc-5vjwv [2.311526433s]
Jan 11 11:21:13.485: INFO: Created: latency-svc-5v7rs
Jan 11 11:21:13.576: INFO: Got endpoints: latency-svc-5v7rs [2.32173626s]
Jan 11 11:21:13.634: INFO: Created: latency-svc-pzgbn
Jan 11 11:21:13.656: INFO: Got endpoints: latency-svc-pzgbn [2.081987776s]
Jan 11 11:21:13.874: INFO: Created: latency-svc-99ngx
Jan 11 11:21:14.103: INFO: Got endpoints: latency-svc-99ngx [2.300725897s]
Jan 11 11:21:14.161: INFO: Created: latency-svc-w4vzm
Jan 11 11:21:14.180: INFO: Got endpoints: latency-svc-w4vzm [2.109785525s]
Jan 11 11:21:14.339: INFO: Created: latency-svc-jvfpd
Jan 11 11:21:14.395: INFO: Got endpoints: latency-svc-jvfpd [1.984830244s]
Jan 11 11:21:14.409: INFO: Created: latency-svc-lts6c
Jan 11 11:21:14.412: INFO: Got endpoints: latency-svc-lts6c [1.986628848s]
Jan 11 11:21:14.523: INFO: Created: latency-svc-8z596
Jan 11 11:21:14.559: INFO: Got endpoints: latency-svc-8z596 [1.949020516s]
Jan 11 11:21:14.723: INFO: Created: latency-svc-72xsc
Jan 11 11:21:14.734: INFO: Got endpoints: latency-svc-72xsc [2.088972132s]
Jan 11 11:21:14.958: INFO: Created: latency-svc-q84gx
Jan 11 11:21:14.967: INFO: Created: latency-svc-xxhvq
Jan 11 11:21:14.981: INFO: Got endpoints: latency-svc-q84gx [2.287891262s]
Jan 11 11:21:14.985: INFO: Got endpoints: latency-svc-xxhvq [2.140777628s]
Jan 11 11:21:15.122: INFO: Created: latency-svc-h8k7f
Jan 11 11:21:15.137: INFO: Got endpoints: latency-svc-h8k7f [2.06905574s]
Jan 11 11:21:15.194: INFO: Created: latency-svc-ncvcb
Jan 11 11:21:15.363: INFO: Got endpoints: latency-svc-ncvcb [2.201623258s]
Jan 11 11:21:15.417: INFO: Created: latency-svc-rxrm7
Jan 11 11:21:15.452: INFO: Got endpoints: latency-svc-rxrm7 [2.253410026s]
Jan 11 11:21:15.723: INFO: Created: latency-svc-x6btk
Jan 11 11:21:15.740: INFO: Got endpoints: latency-svc-x6btk [2.471376236s]
Jan 11 11:21:15.912: INFO: Created: latency-svc-pz7d4
Jan 11 11:21:15.925: INFO: Got endpoints: latency-svc-pz7d4 [2.444808734s]
Jan 11 11:21:16.093: INFO: Created: latency-svc-f7nrg
Jan 11 11:21:16.098: INFO: Got endpoints: latency-svc-f7nrg [2.520984108s]
Jan 11 11:21:16.162: INFO: Created: latency-svc-2bv6x
Jan 11 11:21:16.275: INFO: Got endpoints: latency-svc-2bv6x [2.618709292s]
Jan 11 11:21:16.349: INFO: Created: latency-svc-82h24
Jan 11 11:21:16.453: INFO: Got endpoints: latency-svc-82h24 [2.349294141s]
Jan 11 11:21:16.488: INFO: Created: latency-svc-nkcz7
Jan 11 11:21:16.535: INFO: Got endpoints: latency-svc-nkcz7 [2.354767195s]
Jan 11 11:21:16.643: INFO: Created: latency-svc-vrvvt
Jan 11 11:21:16.665: INFO: Got endpoints: latency-svc-vrvvt [2.269889549s]
Jan 11 11:21:16.712: INFO: Created: latency-svc-8gb4n
Jan 11 11:21:16.717: INFO: Got endpoints: latency-svc-8gb4n [2.304384014s]
Jan 11 11:21:16.823: INFO: Created: latency-svc-xklgq
Jan 11 11:21:16.882: INFO: Created: latency-svc-48qtm
Jan 11 11:21:16.892: INFO: Got endpoints: latency-svc-xklgq [2.332641774s]
Jan 11 11:21:17.012: INFO: Got endpoints: latency-svc-48qtm [2.277609686s]
Jan 11 11:21:17.039: INFO: Created: latency-svc-knplg
Jan 11 11:21:17.059: INFO: Got endpoints: latency-svc-knplg [2.074299334s]
Jan 11 11:21:17.145: INFO: Created: latency-svc-xw69t
Jan 11 11:21:17.238: INFO: Created: latency-svc-wvnmm
Jan 11 11:21:17.331: INFO: Created: latency-svc-brhfx
Jan 11 11:21:17.461: INFO: Got endpoints: latency-svc-xw69t [2.479879961s]
Jan 11 11:21:17.471: INFO: Got endpoints: latency-svc-wvnmm [2.33374882s]
Jan 11 11:21:17.492: INFO: Got endpoints: latency-svc-brhfx [2.128397593s]
Jan 11 11:21:17.501: INFO: Created: latency-svc-s7mmp
Jan 11 11:21:17.627: INFO: Got endpoints: latency-svc-s7mmp [2.1753657s]
Jan 11 11:21:17.844: INFO: Created: latency-svc-r2qm9
Jan 11 11:21:17.862: INFO: Created: latency-svc-hrl5p
Jan 11 11:21:17.869: INFO: Got endpoints: latency-svc-r2qm9 [2.129449838s]
Jan 11 11:21:17.887: INFO: Got endpoints: latency-svc-hrl5p [1.961117658s]
Jan 11 11:21:18.034: INFO: Created: latency-svc-fj2xj
Jan 11 11:21:18.057: INFO: Got endpoints: latency-svc-fj2xj [1.959272054s]
Jan 11 11:21:18.240: INFO: Created: latency-svc-bnp77
Jan 11 11:21:18.251: INFO: Got endpoints: latency-svc-bnp77 [1.976713931s]
Jan 11 11:21:18.300: INFO: Created: latency-svc-l64rc
Jan 11 11:21:18.450: INFO: Got endpoints: latency-svc-l64rc [1.997205684s]
Jan 11 11:21:18.546: INFO: Created: latency-svc-h475t
Jan 11 11:21:18.641: INFO: Got endpoints: latency-svc-h475t [2.105864757s]
Jan 11 11:21:18.732: INFO: Created: latency-svc-2kbmk
Jan 11 11:21:18.893: INFO: Got endpoints: latency-svc-2kbmk [2.22835353s]
Jan 11 11:21:18.956: INFO: Created: latency-svc-zg4tf
Jan 11 11:21:19.062: INFO: Got endpoints: latency-svc-zg4tf [2.34464781s]
Jan 11 11:21:19.137: INFO: Created: latency-svc-gckkx
Jan 11 11:21:19.265: INFO: Got endpoints: latency-svc-gckkx [2.373165094s]
Jan 11 11:21:19.338: INFO: Created: latency-svc-6wbtt
Jan 11 11:21:19.350: INFO: Got endpoints: latency-svc-6wbtt [2.338161383s]
Jan 11 11:21:19.455: INFO: Created: latency-svc-7nbtr
Jan 11 11:21:19.470: INFO: Got endpoints: latency-svc-7nbtr [2.411233039s]
Jan 11 11:21:19.534: INFO: Created: latency-svc-847l6
Jan 11 11:21:19.693: INFO: Got endpoints: latency-svc-847l6 [2.232124096s]
Jan 11 11:21:19.762: INFO: Created: latency-svc-4xdcv
Jan 11 11:21:19.787: INFO: Created: latency-svc-jd565
Jan 11 11:21:19.908: INFO: Got endpoints: latency-svc-4xdcv [2.436765423s]
Jan 11 11:21:19.915: INFO: Got endpoints: latency-svc-jd565 [2.42285636s]
Jan 11 11:21:19.940: INFO: Created: latency-svc-4dlg5
Jan 11 11:21:19.951: INFO: Got endpoints: latency-svc-4dlg5 [2.324184868s]
Jan 11 11:21:19.993: INFO: Created: latency-svc-zmg48
Jan 11 11:21:20.092: INFO: Got endpoints: latency-svc-zmg48 [2.222995279s]
Jan 11 11:21:20.120: INFO: Created: latency-svc-df66t
Jan 11 11:21:20.157: INFO: Got endpoints: latency-svc-df66t [2.270489069s]
Jan 11 11:21:20.177: INFO: Created: latency-svc-9pq6c
Jan 11 11:21:20.182: INFO: Got endpoints: latency-svc-9pq6c [2.124507802s]
Jan 11 11:21:20.349: INFO: Created: latency-svc-5wt75
Jan 11 11:21:20.376: INFO: Got endpoints: latency-svc-5wt75 [2.124933263s]
Jan 11 11:21:20.455: INFO: Created: latency-svc-5wvrz
Jan 11 11:21:20.540: INFO: Got endpoints: latency-svc-5wvrz [2.089829034s]
Jan 11 11:21:20.558: INFO: Created: latency-svc-cccs2
Jan 11 11:21:20.577: INFO: Got endpoints: latency-svc-cccs2 [1.93600702s]
Jan 11 11:21:20.733: INFO: Created: latency-svc-lw4pc
Jan 11 11:21:20.759: INFO: Got endpoints: latency-svc-lw4pc [1.865222332s]
Jan 11 11:21:21.072: INFO: Created: latency-svc-rlmll
Jan 11 11:21:21.106: INFO: Got endpoints: latency-svc-rlmll [2.043991736s]
Jan 11 11:21:21.385: INFO: Created: latency-svc-hd4qp
Jan 11 11:21:21.450: INFO: Got endpoints: latency-svc-hd4qp [2.184297687s]
Jan 11 11:21:21.564: INFO: Created: latency-svc-fpn7d
Jan 11 11:21:21.606: INFO: Got endpoints: latency-svc-fpn7d [2.255657156s]
Jan 11 11:21:21.788: INFO: Created: latency-svc-x4js8
Jan 11 11:21:21.805: INFO: Got endpoints: latency-svc-x4js8 [2.334735451s]
Jan 11 11:21:22.109: INFO: Created: latency-svc-7b9b8
Jan 11 11:21:22.168: INFO: Got endpoints: latency-svc-7b9b8 [2.474690916s]
Jan 11 11:21:22.365: INFO: Created: latency-svc-mbw66
Jan 11 11:21:22.445: INFO: Got endpoints: latency-svc-mbw66 [2.536706137s]
Jan 11 11:21:22.467: INFO: Created: latency-svc-65l8d
Jan 11 11:21:22.620: INFO: Got endpoints: latency-svc-65l8d [2.705644637s]
Jan 11 11:21:22.658: INFO: Created: latency-svc-kgzcv
Jan 11 11:21:22.696: INFO: Got endpoints: latency-svc-kgzcv [2.744278937s]
Jan 11 11:21:22.879: INFO: Created: latency-svc-jslxx
Jan 11 11:21:22.949: INFO: Created: latency-svc-ml6zh
Jan 11 11:21:23.065: INFO: Got endpoints: latency-svc-jslxx [2.972732014s]
Jan 11 11:21:23.076: INFO: Got endpoints: latency-svc-ml6zh [2.919294918s]
Jan 11 11:21:23.147: INFO: Created: latency-svc-gvntn
Jan 11 11:21:23.286: INFO: Got endpoints: latency-svc-gvntn [3.104031285s]
Jan 11 11:21:23.296: INFO: Created: latency-svc-dcmfr
Jan 11 11:21:23.314: INFO: Got endpoints: latency-svc-dcmfr [2.937108963s]
Jan 11 11:21:23.367: INFO: Created: latency-svc-ng8js
Jan 11 11:21:23.471: INFO: Got endpoints: latency-svc-ng8js [2.931159499s]
Jan 11 11:21:23.487: INFO: Created: latency-svc-bjq9c
Jan 11 11:21:23.495: INFO: Got endpoints: latency-svc-bjq9c [2.918285032s]
Jan 11 11:21:23.541: INFO: Created: latency-svc-jz6ft
Jan 11 11:21:23.553: INFO: Got endpoints: latency-svc-jz6ft [2.7944986s]
Jan 11 11:21:23.740: INFO: Created: latency-svc-lq8wg
Jan 11 11:21:23.764: INFO: Got endpoints: latency-svc-lq8wg [2.658131131s]
Jan 11 11:21:24.050: INFO: Created: latency-svc-cdlrf
Jan 11 11:21:24.089: INFO: Got endpoints: latency-svc-cdlrf [2.638670765s]
Jan 11 11:21:24.265: INFO: Created: latency-svc-drhk7
Jan 11 11:21:24.283: INFO: Got endpoints: latency-svc-drhk7 [2.677062969s]
Jan 11 11:21:24.336: INFO: Created: latency-svc-nvvjp
Jan 11 11:21:24.349: INFO: Got endpoints: latency-svc-nvvjp [2.543314408s]
Jan 11 11:21:24.490: INFO: Created: latency-svc-dm5z4
Jan 11 11:21:24.501: INFO: Got endpoints: latency-svc-dm5z4 [2.333010452s]
Jan 11 11:21:24.647: INFO: Created: latency-svc-phhr8
Jan 11 11:21:24.661: INFO: Got endpoints: latency-svc-phhr8 [2.216431556s]
Jan 11 11:21:24.825: INFO: Created: latency-svc-8r9fs
Jan 11 11:21:24.900: INFO: Got endpoints: latency-svc-8r9fs [2.279648633s]
Jan 11 11:21:25.311: INFO: Created: latency-svc-sjwvc
Jan 11 11:21:25.438: INFO: Got endpoints: latency-svc-sjwvc [2.742197543s]
Jan 11 11:21:25.645: INFO: Created: latency-svc-hnk8x
Jan 11 11:21:25.729: INFO: Created: latency-svc-9wbjn
Jan 11 11:21:25.859: INFO: Got endpoints: latency-svc-hnk8x [2.793339352s]
Jan 11 11:21:25.871: INFO: Got endpoints: latency-svc-9wbjn [2.794729602s]
Jan 11 11:21:26.032: INFO: Created: latency-svc-8kvjp
Jan 11 11:21:26.067: INFO: Got endpoints: latency-svc-8kvjp [2.780826613s]
Jan 11 11:21:26.262: INFO: Created: latency-svc-vnjrb
Jan 11 11:21:26.283: INFO: Got endpoints: latency-svc-vnjrb [2.96959411s]
Jan 11 11:21:26.490: INFO: Created: latency-svc-27k5n
Jan 11 11:21:26.503: INFO: Got endpoints: latency-svc-27k5n [3.031854334s]
Jan 11 11:21:26.668: INFO: Created: latency-svc-6tvwx
Jan 11 11:21:26.678: INFO: Got endpoints: latency-svc-6tvwx [3.182763074s]
Jan 11 11:21:26.749: INFO: Created: latency-svc-24lb2
Jan 11 11:21:26.834: INFO: Got endpoints: latency-svc-24lb2 [3.28088108s]
Jan 11 11:21:26.882: INFO: Created: latency-svc-78wkd
Jan 11 11:21:26.935: INFO: Got endpoints: latency-svc-78wkd [3.170571796s]
Jan 11 11:21:26.957: INFO: Created: latency-svc-zlztw
Jan 11 11:21:27.023: INFO: Got endpoints: latency-svc-zlztw [2.934536233s]
Jan 11 11:21:27.059: INFO: Created: latency-svc-svqjj
Jan 11 11:21:27.094: INFO: Got endpoints: latency-svc-svqjj [2.810935144s]
Jan 11 11:21:27.227: INFO: Created: latency-svc-mhm8k
Jan 11 11:21:27.247: INFO: Got endpoints: latency-svc-mhm8k [2.898424202s]
Jan 11 11:21:27.289: INFO: Created: latency-svc-4zw4s
Jan 11 11:21:27.399: INFO: Got endpoints: latency-svc-4zw4s [2.89730653s]
Jan 11 11:21:27.406: INFO: Created: latency-svc-s4j7g
Jan 11 11:21:27.432: INFO: Got endpoints: latency-svc-s4j7g [2.770817403s]
Jan 11 11:21:27.496: INFO: Created: latency-svc-4spgz
Jan 11 11:21:27.618: INFO: Created: latency-svc-k4gff
Jan 11 11:21:27.659: INFO: Got endpoints: latency-svc-4spgz [2.758749015s]
Jan 11 11:21:27.667: INFO: Got endpoints: latency-svc-k4gff [2.229056759s]
Jan 11 11:21:27.767: INFO: Created: latency-svc-4v7kf
Jan 11 11:21:27.805: INFO: Got endpoints: latency-svc-4v7kf [1.946553859s]
Jan 11 11:21:28.016: INFO: Created: latency-svc-d64kg
Jan 11 11:21:28.045: INFO: Got endpoints: latency-svc-d64kg [2.174101549s]
Jan 11 11:21:28.094: INFO: Created: latency-svc-j446d
Jan 11 11:21:28.186: INFO: Got endpoints: latency-svc-j446d [2.118983785s]
Jan 11 11:21:28.216: INFO: Created: latency-svc-c9r4w
Jan 11 11:21:28.244: INFO: Got endpoints: latency-svc-c9r4w [1.960614522s]
Jan 11 11:21:28.439: INFO: Created: latency-svc-8l99h
Jan 11 11:21:28.467: INFO: Got endpoints: latency-svc-8l99h [1.963700594s]
Jan 11 11:21:28.624: INFO: Created: latency-svc-4xz2p
Jan 11 11:21:28.659: INFO: Created: latency-svc-r6qsx
Jan 11 11:21:28.666: INFO: Got endpoints: latency-svc-4xz2p [1.98769174s]
Jan 11 11:21:28.763: INFO: Got endpoints: latency-svc-r6qsx [1.929102453s]
Jan 11 11:21:28.795: INFO: Created: latency-svc-75h6t
Jan 11 11:21:28.804: INFO: Got endpoints: latency-svc-75h6t [1.868853595s]
Jan 11 11:21:28.837: INFO: Created: latency-svc-lkrkj
Jan 11 11:21:28.966: INFO: Got endpoints: latency-svc-lkrkj [1.943114238s]
Jan 11 11:21:29.003: INFO: Created: latency-svc-mnwjt
Jan 11 11:21:29.026: INFO: Got endpoints: latency-svc-mnwjt [1.931554777s]
Jan 11 11:21:29.254: INFO: Created: latency-svc-nvdc9
Jan 11 11:21:29.310: INFO: Got endpoints: latency-svc-nvdc9 [2.063026551s]
Jan 11 11:21:29.495: INFO: Created: latency-svc-ljnxl
Jan 11 11:21:29.517: INFO: Got endpoints: latency-svc-ljnxl [2.118750914s]
Jan 11 11:21:29.568: INFO: Created: latency-svc-w5ztk
Jan 11 11:21:29.711: INFO: Got endpoints: latency-svc-w5ztk [2.278892718s]
Jan 11 11:21:29.766: INFO: Created: latency-svc-flhmg
Jan 11 11:21:29.788: INFO: Got endpoints: latency-svc-flhmg [2.128502143s]
Jan 11 11:21:29.899: INFO: Created: latency-svc-bjkjl
Jan 11 11:21:29.946: INFO: Got endpoints: latency-svc-bjkjl [2.278307132s]
Jan 11 11:21:30.123: INFO: Created: latency-svc-c6qcw
Jan 11 11:21:30.139: INFO: Got endpoints: latency-svc-c6qcw [2.333642886s]
Jan 11 11:21:30.202: INFO: Created: latency-svc-qg644
Jan 11 11:21:30.373: INFO: Got endpoints: latency-svc-qg644 [2.327580458s]
Jan 11 11:21:30.419: INFO: Created: latency-svc-f7jsz
Jan 11 11:21:30.426: INFO: Got endpoints: latency-svc-f7jsz [2.240084776s]
Jan 11 11:21:30.607: INFO: Created: latency-svc-vr4bw
Jan 11 11:21:30.674: INFO: Got endpoints: latency-svc-vr4bw [2.429630892s]
Jan 11 11:21:30.774: INFO: Created: latency-svc-z7hht
Jan 11 11:21:30.822: INFO: Got endpoints: latency-svc-z7hht [2.355254947s]
Jan 11 11:21:30.829: INFO: Created: latency-svc-knf2j
Jan 11 11:21:30.840: INFO: Got endpoints: latency-svc-knf2j [2.173888102s]
Jan 11 11:21:30.964: INFO: Created: latency-svc-k4l5t
Jan 11 11:21:31.000: INFO: Got endpoints: latency-svc-k4l5t [2.236286139s]
Jan 11 11:21:31.186: INFO: Created: latency-svc-f2v8k
Jan 11 11:21:31.225: INFO: Got endpoints: latency-svc-f2v8k [2.421441661s]
Jan 11 11:21:31.260: INFO: Created: latency-svc-g8szb
Jan 11 11:21:31.274: INFO: Got endpoints: latency-svc-g8szb [2.307742355s]
Jan 11 11:21:31.398: INFO: Created: latency-svc-bdqxt
Jan 11 11:21:31.425: INFO: Got endpoints: latency-svc-bdqxt [2.398354443s]
Jan 11 11:21:31.589: INFO: Created: latency-svc-9jhhq
Jan 11 11:21:31.610: INFO: Got endpoints: latency-svc-9jhhq [2.29909008s]
Jan 11 11:21:31.978: INFO: Created: latency-svc-hkkcb
Jan 11 11:21:32.307: INFO: Got endpoints: latency-svc-hkkcb [2.789321174s]
Jan 11 11:21:32.334: INFO: Created: latency-svc-x8w9j
Jan 11 11:21:32.371: INFO: Got endpoints: latency-svc-x8w9j [2.65948036s]
Jan 11 11:21:32.538: INFO: Created: latency-svc-8km55
Jan 11 11:21:32.616: INFO: Got endpoints: latency-svc-8km55 [2.828773218s]
Jan 11 11:21:32.728: INFO: Created: latency-svc-kttwj
Jan 11 11:21:32.758: INFO: Got endpoints: latency-svc-kttwj [2.812340219s]
Jan 11 11:21:32.817: INFO: Created: latency-svc-6fr8n
Jan 11 11:21:32.958: INFO: Got endpoints: latency-svc-6fr8n [2.818432445s]
Jan 11 11:21:32.986: INFO: Created: latency-svc-rt94l
Jan 11 11:21:33.025: INFO: Got endpoints: latency-svc-rt94l [2.651442871s]
Jan 11 11:21:33.207: INFO: Created: latency-svc-kr84j
Jan 11 11:21:33.221: INFO: Got endpoints: latency-svc-kr84j [2.794975038s]
Jan 11 11:21:33.397: INFO: Created: latency-svc-zqfj7
Jan 11 11:21:33.422: INFO: Got endpoints: latency-svc-zqfj7 [2.747804893s]
Jan 11 11:21:33.543: INFO: Created: latency-svc-n9cxs
Jan 11 11:21:33.551: INFO: Got endpoints: latency-svc-n9cxs [2.728833207s]
Jan 11 11:21:33.593: INFO: Created: latency-svc-qzj92
Jan 11 11:21:33.782: INFO: Created: latency-svc-v9xh2
Jan 11 11:21:33.782: INFO: Got endpoints: latency-svc-qzj92 [2.942318556s]
Jan 11 11:21:33.979: INFO: Got endpoints: latency-svc-v9xh2 [2.979622699s]
Jan 11 11:21:34.031: INFO: Created: latency-svc-xrmq8
Jan 11 11:21:34.044: INFO: Got endpoints: latency-svc-xrmq8 [2.818976757s]
Jan 11 11:21:34.090: INFO: Created: latency-svc-xdd5x
Jan 11 11:21:34.224: INFO: Got endpoints: latency-svc-xdd5x [2.949458396s]
Jan 11 11:21:34.225: INFO: Created: latency-svc-mbmz8
Jan 11 11:21:34.367: INFO: Got endpoints: latency-svc-mbmz8 [2.941914353s]
Jan 11 11:21:34.400: INFO: Created: latency-svc-75xbg
Jan 11 11:21:34.423: INFO: Got endpoints: latency-svc-75xbg [2.813740505s]
Jan 11 11:21:34.612: INFO: Created: latency-svc-dmbs4
Jan 11 11:21:34.643: INFO: Got endpoints: latency-svc-dmbs4 [2.336259813s]
Jan 11 11:21:34.688: INFO: Created: latency-svc-chts6
Jan 11 11:21:34.707: INFO: Got endpoints: latency-svc-chts6 [2.336050692s]
Jan 11 11:21:34.813: INFO: Created: latency-svc-vc2dq
Jan 11 11:21:34.825: INFO: Got endpoints: latency-svc-vc2dq [2.208456194s]
Jan 11 11:21:35.997: INFO: Created: latency-svc-dvwff
Jan 11 11:21:36.023: INFO: Got endpoints: latency-svc-dvwff [3.264530997s]
Jan 11 11:21:36.156: INFO: Created: latency-svc-r6gq2
Jan 11 11:21:36.178: INFO: Got endpoints: latency-svc-r6gq2 [3.220077969s]
Jan 11 11:21:36.345: INFO: Created: latency-svc-vjt5p
Jan 11 11:21:36.359: INFO: Got endpoints: latency-svc-vjt5p [3.333731564s]
Jan 11 11:21:36.574: INFO: Created: latency-svc-x2jk6
Jan 11 11:21:36.588: INFO: Got endpoints: latency-svc-x2jk6 [3.36725443s]
Jan 11 11:21:36.751: INFO: Created: latency-svc-qvcvk
Jan 11 11:21:36.768: INFO: Got endpoints: latency-svc-qvcvk [3.345828948s]
Jan 11 11:21:36.808: INFO: Created: latency-svc-l46kw
Jan 11 11:21:36.813: INFO: Got endpoints: latency-svc-l46kw [3.262270028s]
Jan 11 11:21:36.945: INFO: Created: latency-svc-tcgf6
Jan 11 11:21:36.973: INFO: Got endpoints: latency-svc-tcgf6 [3.191055752s]
Jan 11 11:21:37.018: INFO: Created: latency-svc-jcnn5
Jan 11 11:21:37.165: INFO: Got endpoints: latency-svc-jcnn5 [3.185310535s]
Jan 11 11:21:37.175: INFO: Created: latency-svc-6h7mn
Jan 11 11:21:37.197: INFO: Got endpoints: latency-svc-6h7mn [3.152541893s]
Jan 11 11:21:37.267: INFO: Created: latency-svc-xshth
Jan 11 11:21:37.471: INFO: Got endpoints: latency-svc-xshth [3.246620422s]
Jan 11 11:21:37.497: INFO: Created: latency-svc-sgjjf
Jan 11 11:21:37.517: INFO: Got endpoints: latency-svc-sgjjf [3.150612263s]
Jan 11 11:21:37.560: INFO: Created: latency-svc-tns5x
Jan 11 11:21:37.645: INFO: Got endpoints: latency-svc-tns5x [3.221826201s]
Jan 11 11:21:37.666: INFO: Created: latency-svc-bx7c7
Jan 11 11:21:37.723: INFO: Created: latency-svc-8kh2l
Jan 11 11:21:37.728: INFO: Got endpoints: latency-svc-bx7c7 [3.084548729s]
Jan 11 11:21:37.834: INFO: Got endpoints: latency-svc-8kh2l [3.127179963s]
Jan 11 11:21:37.863: INFO: Created: latency-svc-fcv4c
Jan 11 11:21:37.869: INFO: Got endpoints: latency-svc-fcv4c [3.043661701s]
Jan 11 11:21:37.941: INFO: Created: latency-svc-56qvr
Jan 11 11:21:38.028: INFO: Got endpoints: latency-svc-56qvr [2.004630667s]
Jan 11 11:21:38.049: INFO: Created: latency-svc-mrfpl
Jan 11 11:21:38.069: INFO: Got endpoints: latency-svc-mrfpl [1.890851525s]
Jan 11 11:21:38.254: INFO: Created: latency-svc-4w6vc
Jan 11 11:21:38.290: INFO: Got endpoints: latency-svc-4w6vc [1.93123972s]
Jan 11 11:21:38.508: INFO: Created: latency-svc-788l7
Jan 11 11:21:38.554: INFO: Got endpoints: latency-svc-788l7 [1.965963998s]
Jan 11 11:21:38.739: INFO: Created: latency-svc-5wszr
Jan 11 11:21:38.763: INFO: Got endpoints: latency-svc-5wszr [1.9951955s]
Jan 11 11:21:38.797: INFO: Created: latency-svc-vzqgk
Jan 11 11:21:38.903: INFO: Got endpoints: latency-svc-vzqgk [2.089023708s]
Jan 11 11:21:38.933: INFO: Created: latency-svc-hjpjd
Jan 11 11:21:38.942: INFO: Got endpoints: latency-svc-hjpjd [1.968009278s]
Jan 11 11:21:39.002: INFO: Created: latency-svc-k8xdg
Jan 11 11:21:39.082: INFO: Got endpoints: latency-svc-k8xdg [1.917125783s]
Jan 11 11:21:39.122: INFO: Created: latency-svc-gj2fg
Jan 11 11:21:39.136: INFO: Got endpoints: latency-svc-gj2fg [1.938635497s]
Jan 11 11:21:39.337: INFO: Created: latency-svc-29wpr
Jan 11 11:21:39.367: INFO: Got endpoints: latency-svc-29wpr [1.896089342s]
Jan 11 11:21:39.449: INFO: Created: latency-svc-r6kdc
Jan 11 11:21:39.546: INFO: Got endpoints: latency-svc-r6kdc [2.028872876s]
Jan 11 11:21:39.583: INFO: Created: latency-svc-pknjc
Jan 11 11:21:39.583: INFO: Got endpoints: latency-svc-pknjc [1.937717293s]
Jan 11 11:21:39.647: INFO: Created: latency-svc-z6pk5
Jan 11 11:21:39.757: INFO: Got endpoints: latency-svc-z6pk5 [2.028939817s]
Jan 11 11:21:39.792: INFO: Created: latency-svc-mt28d
Jan 11 11:21:39.951: INFO: Created: latency-svc-7f6h5
Jan 11 11:21:39.963: INFO: Got endpoints: latency-svc-mt28d [2.128621006s]
Jan 11 11:21:39.976: INFO: Got endpoints: latency-svc-7f6h5 [2.107221748s]
Jan 11 11:21:40.143: INFO: Created: latency-svc-6nbzn
Jan 11 11:21:40.217: INFO: Got endpoints: latency-svc-6nbzn [2.189545598s]
Jan 11 11:21:40.218: INFO: Created: latency-svc-pnjwv
Jan 11 11:21:40.352: INFO: Got endpoints: latency-svc-pnjwv [2.283065758s]
Jan 11 11:21:40.352: INFO: Latencies: [171.388475ms 266.016911ms 391.689274ms 526.327301ms 565.210092ms 709.166115ms 870.766695ms 945.478076ms 1.227282132s 1.463650043s 1.691105569s 1.826127429s 1.865222332s 1.868853595s 1.890851525s 1.896089342s 1.917125783s 1.929102453s 1.93123972s 1.931554777s 1.93600702s 1.937717293s 1.938635497s 1.943114238s 1.946553859s 1.949020516s 1.959272054s 1.960614522s 1.961117658s 1.963700594s 1.965963998s 1.968009278s 1.976713931s 1.984830244s 1.986628848s 1.98769174s 1.9951955s 1.997205684s 2.004630667s 2.028872876s 2.028939817s 2.043991736s 2.060073508s 2.063026551s 2.06905574s 2.074299334s 2.081987776s 2.088972132s 2.089023708s 2.089829034s 2.101145598s 2.105864757s 2.107221748s 2.109785525s 2.118750914s 2.118983785s 2.124507802s 2.124933263s 2.128397593s 2.128502143s 2.128621006s 2.129449838s 2.140777628s 2.173888102s 2.174101549s 2.1753657s 2.184297687s 2.189545598s 2.201623258s 2.208456194s 2.216431556s 2.222995279s 2.22835353s 2.229056759s 2.232124096s 2.236286139s 2.240084776s 2.253410026s 2.255657156s 2.269889549s 2.270489069s 2.273717199s 2.277609686s 2.278307132s 2.278892718s 2.279648633s 2.283065758s 2.28342473s 2.287891262s 2.29909008s 2.300725897s 2.304384014s 2.307742355s 2.311526433s 2.32173626s 2.324184868s 2.327580458s 2.332594942s 2.332641774s 2.333010452s 2.333642886s 2.33374882s 2.334735451s 2.336050692s 2.336259813s 2.338161383s 2.34464781s 2.349294141s 2.354767195s 2.355254947s 2.373165094s 2.398354443s 2.411233039s 2.421441661s 2.42285636s 2.429630892s 2.436765423s 2.444808734s 2.471376236s 2.474690916s 2.479879961s 2.503123597s 2.520984108s 2.536706137s 2.543314408s 2.593036629s 2.618709292s 2.638670765s 2.651442871s 2.658131131s 2.65948036s 2.659838457s 2.677062969s 2.705644637s 2.728833207s 2.742197543s 2.744278937s 2.747804893s 2.748148031s 2.758749015s 2.770817403s 2.780826613s 2.789321174s 2.793339352s 2.7944986s 2.794729602s 2.794975038s 2.807898779s 2.810935144s 2.812340219s 2.813740505s 2.818432445s 2.818976757s 2.828773218s 2.846947164s 2.89730653s 2.898424202s 2.918285032s 2.919294918s 2.931159499s 2.934536233s 2.937108963s 2.941914353s 2.942318556s 2.949458396s 2.96959411s 2.972732014s 2.979622699s 3.008727799s 3.022223404s 3.031854334s 3.036368705s 3.043661701s 3.047299746s 3.084548729s 3.09833765s 3.104031285s 3.127179963s 3.150612263s 3.152541893s 3.170571796s 3.182320623s 3.182763074s 3.185310535s 3.188177765s 3.191055752s 3.213368104s 3.220077969s 3.221826201s 3.237849149s 3.244787234s 3.246620422s 3.262270028s 3.264530997s 3.26623452s 3.28088108s 3.333731564s 3.345828948s 3.3569355s 3.36725443s]
Jan 11 11:21:40.352: INFO: 50 %ile: 2.333642886s
Jan 11 11:21:40.353: INFO: 90 %ile: 3.170571796s
Jan 11 11:21:40.353: INFO: 99 %ile: 3.3569355s
Jan 11 11:21:40.353: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:21:40.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-nj6ct" for this suite.
Jan 11 11:22:56.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:22:56.699: INFO: namespace: e2e-tests-svc-latency-nj6ct, resource: bindings, ignored listing per whitelist
Jan 11 11:22:56.789: INFO: namespace e2e-tests-svc-latency-nj6ct deletion completed in 1m16.41844122s

• [SLOW TEST:119.826 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
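For context: this test creates a series of Services in front of a fixed set of pods and measures the delay between creating each Service and its Endpoints becoming available; the 50/90/99 %ile lines above summarize the 200 samples listed. A minimal sketch of the kind of selector-based Service involved (name and label are hypothetical stand-ins for the generated latency-svc-* objects):

apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example        # hypothetical; the test generates random latency-svc-* names
  namespace: e2e-tests-svc-latency-nj6ct
spec:
  selector:
    name: svc-latency-rc           # assumed label on the backing pods; the real label is set by the framework
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80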
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:22:56.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-b7944e10-3464-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 11:22:56.997: INFO: Waiting up to 5m0s for pod "pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005" in namespace "e2e-tests-configmap-jqljm" to be "success or failure"
Jan 11 11:22:57.021: INFO: Pod "pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.660429ms
Jan 11 11:22:59.040: INFO: Pod "pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043270286s
Jan 11 11:23:01.063: INFO: Pod "pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066195558s
Jan 11 11:23:03.106: INFO: Pod "pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109193128s
Jan 11 11:23:05.117: INFO: Pod "pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119806445s
Jan 11 11:23:07.133: INFO: Pod "pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.13618562s
STEP: Saw pod success
Jan 11 11:23:07.133: INFO: Pod "pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:23:07.140: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 11 11:23:07.201: INFO: Waiting for pod pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:23:07.208: INFO: Pod pod-configmaps-b795366f-3464-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:23:07.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jqljm" for this suite.
Jan 11 11:23:13.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:23:13.396: INFO: namespace: e2e-tests-configmap-jqljm, resource: bindings, ignored listing per whitelist
Jan 11 11:23:13.434: INFO: namespace e2e-tests-configmap-jqljm deletion completed in 6.219456117s

• [SLOW TEST:16.645 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
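A minimal sketch of what this test exercises: a ConfigMap mounted as a volume with an explicit key-to-path mapping, consumed by a pod running as a non-root user (all names, data and the UID below are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map-example   # hypothetical stand-in for the generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example               # hypothetical
spec:
  securityContext:
    runAsUser: 1000                          # non-root UID, the property this variant adds
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                           # hypothetical test image
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-example
      items:
      - key: data-1
        path: path/to/data-1                 # explicit key-to-path mapping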
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:23:13.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 11:23:13.652: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-g589f" to be "success or failure"
Jan 11 11:23:13.675: INFO: Pod "downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.239044ms
Jan 11 11:23:15.694: INFO: Pod "downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041461434s
Jan 11 11:23:17.713: INFO: Pod "downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060471018s
Jan 11 11:23:19.728: INFO: Pod "downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075758131s
Jan 11 11:23:21.746: INFO: Pod "downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094260117s
Jan 11 11:23:23.758: INFO: Pod "downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105911611s
STEP: Saw pod success
Jan 11 11:23:23.758: INFO: Pod "downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:23:23.763: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 11:23:24.513: INFO: Waiting for pod downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:23:24.730: INFO: Pod downwardapi-volume-c17ecb94-3464-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:23:24.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g589f" for this suite.
Jan 11 11:23:30.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:23:31.025: INFO: namespace: e2e-tests-downward-api-g589f, resource: bindings, ignored listing per whitelist
Jan 11 11:23:31.087: INFO: namespace e2e-tests-downward-api-g589f deletion completed in 6.33091451s

• [SLOW TEST:17.653 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
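A minimal sketch of the downward API volume behaviour under test, a per-item mode on the projected file (name, image and command are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example           # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                           # hypothetical test image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                           # the per-item file mode being verified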
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:23:31.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 11:24:01.516: INFO: Container started at 2020-01-11 11:23:39 +0000 UTC, pod became ready at 2020-01-11 11:24:00 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:24:01.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-6fc95" for this suite.
Jan 11 11:24:25.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:24:25.696: INFO: namespace: e2e-tests-container-probe-6fc95, resource: bindings, ignored listing per whitelist
Jan 11 11:24:25.772: INFO: namespace e2e-tests-container-probe-6fc95 deletion completed in 24.250075311s

• [SLOW TEST:54.683 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
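The timestamps above (container started 11:23:39, Ready at 11:24:00, restart count unchanged) reflect a readiness probe with an initial delay. A minimal sketch of a pod with such a probe (name, image and delay value are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example               # hypothetical
spec:
  containers:
  - name: test-webserver
    image: nginx                             # hypothetical; the real test uses an e2e test image
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20                # hypothetical value; the pod must not report Ready before it elapses
      periodSeconds: 5
      failureThreshold: 3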
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:24:25.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 11 11:24:26.029: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:24:26.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w4c6r" for this suite.
Jan 11 11:24:32.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:24:32.310: INFO: namespace: e2e-tests-kubectl-w4c6r, resource: bindings, ignored listing per whitelist
Jan 11 11:24:32.363: INFO: namespace e2e-tests-kubectl-w4c6r deletion completed in 6.203266844s

• [SLOW TEST:6.591 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
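Note on the proxy invocation logged above: passing -p 0 (equivalently --port=0) asks kubectl proxy to bind an ephemeral port and print the chosen address ("Starting to serve on 127.0.0.1:<port>"); the test then issues a request to /api/ through that address, which is what the "curling proxy /api/ output" step refers to.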
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:24:32.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 11 11:24:32.568: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 11 11:24:32.647: INFO: Waiting for terminating namespaces to be deleted...
Jan 11 11:24:32.703: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 11 11:24:32.729: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 11:24:32.729: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 11:24:32.729: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 11:24:32.729: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 11 11:24:32.729: INFO: 	Container coredns ready: true, restart count 0
Jan 11 11:24:32.729: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 11 11:24:32.729: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 11 11:24:32.729: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 11:24:32.729: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 11 11:24:32.729: INFO: 	Container weave ready: true, restart count 0
Jan 11 11:24:32.729: INFO: 	Container weave-npc ready: true, restart count 0
Jan 11 11:24:32.729: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 11 11:24:32.729: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e8d15e66f9b98a], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:24:33.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-66slr" for this suite.
Jan 11 11:24:39.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:24:39.999: INFO: namespace: e2e-tests-sched-pred-66slr, resource: bindings, ignored listing per whitelist
Jan 11 11:24:40.159: INFO: namespace e2e-tests-sched-pred-66slr deletion completed in 6.267420323s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.796 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
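A minimal sketch of a pod whose nodeSelector matches no node, which yields exactly the FailedScheduling event quoted above (the selector value and image are hypothetical; the real test uses a randomly generated label):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod                       # name taken from the event above
spec:
  nodeSelector:
    kubernetes.io/hostname: no-such-node     # hypothetical value no node satisfies
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.1              # hypothetical placeholder image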
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:24:40.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 11:24:40.539: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 11 11:24:45.577: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 11 11:24:51.603: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 11 11:24:51.664: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-nn9vf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nn9vf/deployments/test-cleanup-deployment,UID:fbed9b23-3464-11ea-a994-fa163e34d433,ResourceVersion:17917512,Generation:1,CreationTimestamp:2020-01-11 11:24:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 11 11:24:51.675: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:24:51.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nn9vf" for this suite.
Jan 11 11:24:59.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:25:00.091: INFO: namespace: e2e-tests-deployment-nn9vf, resource: bindings, ignored listing per whitelist
Jan 11 11:25:00.136: INFO: namespace e2e-tests-deployment-nn9vf deletion completed in 8.311062485s

• [SLOW TEST:19.978 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
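The object dump above shows RevisionHistoryLimit:*0; with revisionHistoryLimit set to 0 the Deployment controller deletes old ReplicaSets as soon as they are fully scaled down, which is the cleanup this test waits for. A minimal sketch of such a Deployment (the redis image matches the dump; other values are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0          # old ReplicaSets are garbage-collected once scaled down
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # image shown in the dump above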
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:25:00.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 11 11:25:01.105: INFO: Number of nodes with available pods: 0
Jan 11 11:25:01.105: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:02.132: INFO: Number of nodes with available pods: 0
Jan 11 11:25:02.132: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:03.780: INFO: Number of nodes with available pods: 0
Jan 11 11:25:03.780: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:04.131: INFO: Number of nodes with available pods: 0
Jan 11 11:25:04.131: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:05.222: INFO: Number of nodes with available pods: 0
Jan 11 11:25:05.222: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:06.125: INFO: Number of nodes with available pods: 0
Jan 11 11:25:06.125: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:07.625: INFO: Number of nodes with available pods: 0
Jan 11 11:25:07.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:08.280: INFO: Number of nodes with available pods: 0
Jan 11 11:25:08.280: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:09.133: INFO: Number of nodes with available pods: 0
Jan 11 11:25:09.133: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:10.137: INFO: Number of nodes with available pods: 0
Jan 11 11:25:10.137: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:11.136: INFO: Number of nodes with available pods: 1
Jan 11 11:25:11.136: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 11 11:25:11.237: INFO: Number of nodes with available pods: 0
Jan 11 11:25:11.237: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:12.293: INFO: Number of nodes with available pods: 0
Jan 11 11:25:12.293: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:13.257: INFO: Number of nodes with available pods: 0
Jan 11 11:25:13.257: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:14.486: INFO: Number of nodes with available pods: 0
Jan 11 11:25:14.486: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:15.276: INFO: Number of nodes with available pods: 0
Jan 11 11:25:15.276: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:16.268: INFO: Number of nodes with available pods: 0
Jan 11 11:25:16.268: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:17.260: INFO: Number of nodes with available pods: 0
Jan 11 11:25:17.260: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:18.255: INFO: Number of nodes with available pods: 0
Jan 11 11:25:18.255: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:19.350: INFO: Number of nodes with available pods: 0
Jan 11 11:25:19.350: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:20.278: INFO: Number of nodes with available pods: 0
Jan 11 11:25:20.278: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:21.260: INFO: Number of nodes with available pods: 0
Jan 11 11:25:21.260: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:22.275: INFO: Number of nodes with available pods: 0
Jan 11 11:25:22.275: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:23.828: INFO: Number of nodes with available pods: 0
Jan 11 11:25:23.828: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:24.264: INFO: Number of nodes with available pods: 0
Jan 11 11:25:24.264: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:25.793: INFO: Number of nodes with available pods: 0
Jan 11 11:25:25.793: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:26.279: INFO: Number of nodes with available pods: 0
Jan 11 11:25:26.279: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:27.271: INFO: Number of nodes with available pods: 0
Jan 11 11:25:27.271: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 11:25:28.292: INFO: Number of nodes with available pods: 1
Jan 11 11:25:28.292: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-z9x76, will wait for the garbage collector to delete the pods
Jan 11 11:25:28.432: INFO: Deleting DaemonSet.extensions daemon-set took: 61.356752ms
Jan 11 11:25:28.532: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.227463ms
Jan 11 11:25:42.681: INFO: Number of nodes with available pods: 0
Jan 11 11:25:42.681: INFO: Number of running nodes: 0, number of available pods: 0
Jan 11 11:25:42.690: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-z9x76/daemonsets","resourceVersion":"17917688"},"items":null}

Jan 11 11:25:42.694: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-z9x76/pods","resourceVersion":"17917688"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:25:42.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-z9x76" for this suite.
Jan 11 11:25:50.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:25:50.913: INFO: namespace: e2e-tests-daemonsets-z9x76, resource: bindings, ignored listing per whitelist
Jan 11 11:25:50.919: INFO: namespace e2e-tests-daemonsets-z9x76 deletion completed in 8.211984987s

• [SLOW TEST:50.782 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
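A minimal sketch of a DaemonSet like the "daemon-set" created above: one pod per eligible node, and the controller recreates the pod when it is deleted, which is the "revived" behaviour checked in the second half of the test (label and image are hypothetical):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                 # name used in the log above
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # hypothetical label; the real selector is generated by the test
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # hypothetical placeholder image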
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:25:50.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:26:01.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-v9qnm" for this suite.
Jan 11 11:26:55.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:26:55.449: INFO: namespace: e2e-tests-kubelet-test-v9qnm, resource: bindings, ignored listing per whitelist
Jan 11 11:26:55.484: INFO: namespace e2e-tests-kubelet-test-v9qnm deletion completed in 54.227706405s

• [SLOW TEST:64.565 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
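A minimal sketch of the read-only root filesystem setting under test; the container's attempt to write to / is expected to fail (name, image and command are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs-example   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-fs
    image: busybox
    command: ["sh", "-c", "echo test > /file; sleep 240"]   # hypothetical command; the write is expected to fail
    securityContext:
      readOnlyRootFilesystem: true                          # the property under test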
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:26:55.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:27:10.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-kbl6d" for this suite.
Jan 11 11:27:34.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:27:34.922: INFO: namespace: e2e-tests-replication-controller-kbl6d, resource: bindings, ignored listing per whitelist
Jan 11 11:27:35.050: INFO: namespace e2e-tests-replication-controller-kbl6d deletion completed in 24.258136237s

• [SLOW TEST:39.565 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
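A minimal sketch of the adoption scenario: a bare pod carrying a 'name' label is created first, then a ReplicationController whose selector matches that label takes ownership of it instead of creating a new replica (images and commands are hypothetical; the names follow the STEP lines above):

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption               # the 'name' label the controller selector will match
spec:
  containers:
  - name: pod-adoption
    image: busybox                   # hypothetical image
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption               # matches the pre-existing pod, which is adopted rather than replaced
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: busybox               # hypothetical image
        command: ["sleep", "3600"]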
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:27:35.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 11 11:27:45.817: INFO: Successfully updated pod "labelsupdate5d6784a8-3465-11ea-b0bd-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:27:47.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cx8rj" for this suite.
Jan 11 11:28:12.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:28:12.232: INFO: namespace: e2e-tests-projected-cx8rj, resource: bindings, ignored listing per whitelist
Jan 11 11:28:12.279: INFO: namespace e2e-tests-projected-cx8rj deletion completed in 24.328559809s

• [SLOW TEST:37.229 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
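A minimal sketch of a projected downwardAPI volume exposing metadata.labels; after the pod's labels are patched (the "Successfully updated pod" line above), the kubelet rewrites the projected file, which the test watches to confirm the update (name, image and label are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example         # hypothetical
  labels:
    key: value1                      # label the test later patches
spec:
  containers:
  - name: client-container
    image: busybox                   # hypothetical test image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels   # the file is rewritten when the pod's labels change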
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:28:12.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-73b055df-3465-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 11:28:12.631: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-k7547" to be "success or failure"
Jan 11 11:28:12.640: INFO: Pod "pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.233018ms
Jan 11 11:28:15.016: INFO: Pod "pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.385636783s
Jan 11 11:28:17.034: INFO: Pod "pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402939566s
Jan 11 11:28:19.049: INFO: Pod "pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418207585s
Jan 11 11:28:21.159: INFO: Pod "pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528702749s
Jan 11 11:28:23.461: INFO: Pod "pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.829899092s
STEP: Saw pod success
Jan 11 11:28:23.461: INFO: Pod "pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:28:23.471: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 11 11:28:23.633: INFO: Waiting for pod pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:28:23.861: INFO: Pod pod-projected-configmaps-73b7879f-3465-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:28:23.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k7547" for this suite.
Jan 11 11:28:29.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:28:29.968: INFO: namespace: e2e-tests-projected-k7547, resource: bindings, ignored listing per whitelist
Jan 11 11:28:30.042: INFO: namespace e2e-tests-projected-k7547 deletion completed in 6.167608349s

• [SLOW TEST:17.763 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:28:30.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 11 11:31:33.013: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:33.109: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:35.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:35.127: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:37.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:37.132: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:39.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:39.132: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:41.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:41.136: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:43.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:43.125: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:45.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:45.122: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:47.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:47.120: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:49.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:49.252: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:51.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:51.143: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:53.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:53.125: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:55.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:55.127: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:57.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:57.127: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:31:59.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:31:59.135: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:01.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:01.125: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:03.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:03.125: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:05.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:05.139: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:07.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:07.131: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:09.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:09.131: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:11.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:11.135: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:13.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:13.122: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:15.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:15.135: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:17.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:17.121: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:19.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:19.163: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:21.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:21.129: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:23.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:23.127: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:25.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:25.125: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:27.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:27.127: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:29.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:29.126: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:31.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:31.133: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:33.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:33.125: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:35.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:35.129: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:37.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:37.129: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:39.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:39.131: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:41.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:41.121: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:43.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:43.142: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:45.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:45.129: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:47.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:47.128: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:49.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:49.134: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:51.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:51.128: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:53.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:53.141: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:55.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:55.214: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:57.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:57.148: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:32:59.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:32:59.132: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:33:01.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:33:01.140: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 11:33:03.110: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 11:33:03.133: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:33:03.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qxspp" for this suite.
Jan 11 11:33:27.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:33:27.321: INFO: namespace: e2e-tests-container-lifecycle-hook-qxspp, resource: bindings, ignored listing per whitelist
Jan 11 11:33:27.356: INFO: namespace e2e-tests-container-lifecycle-hook-qxspp deletion completed in 24.211893698s

• [SLOW TEST:297.313 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
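A minimal sketch of a pod with a postStart exec hook like the one exercised above; in the real test the hook command calls back to the handler pod created in BeforeEach, here it is a hypothetical stand-in:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # name from the log above
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                     # hypothetical image
    command: ["sleep", "600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart > /tmp/hook"]   # hypothetical hook command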
SSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:33:27.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-2f6e1239-3466-11ea-b0bd-0242ac110005
STEP: Creating secret with name s-test-opt-upd-2f6e1282-3466-11ea-b0bd-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2f6e1239-3466-11ea-b0bd-0242ac110005
STEP: Updating secret s-test-opt-upd-2f6e1282-3466-11ea-b0bd-0242ac110005
STEP: Creating secret with name s-test-opt-create-2f6e1298-3466-11ea-b0bd-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:33:44.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7wvhg" for this suite.
Jan 11 11:34:08.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:34:08.331: INFO: namespace: e2e-tests-secrets-7wvhg, resource: bindings, ignored listing per whitelist
Jan 11 11:34:08.420: INFO: namespace e2e-tests-secrets-7wvhg deletion completed in 24.246649178s

• [SLOW TEST:41.064 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
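
A minimal sketch of the volume shape this test drives: a pod that mounts a Secret with Optional set, so the pod starts even while the secret is missing and the kubelet refreshes the mounted files as secrets are deleted, updated or created. Names, image and mount path are illustrative.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithOptionalSecretVolume mounts the named secret as an optional volume.
func podWithOptionalSecretVolume(secretName string) *corev1.Pod {
	optional := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-optional"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secretName,
						// Optional: the pod starts even if the secret is absent,
						// and the kubelet updates the mounted files when the
						// secret is later created or modified.
						Optional: &optional,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "watcher",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "while true; do ls /etc/secret; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret",
				}},
			}},
		},
	}
}
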
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:34:08.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 11 11:34:08.765: INFO: Waiting up to 5m0s for pod "client-containers-47ff8760-3466-11ea-b0bd-0242ac110005" in namespace "e2e-tests-containers-wpk6l" to be "success or failure"
Jan 11 11:34:08.838: INFO: Pod "client-containers-47ff8760-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 73.782333ms
Jan 11 11:34:10.859: INFO: Pod "client-containers-47ff8760-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094134268s
Jan 11 11:34:12.878: INFO: Pod "client-containers-47ff8760-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113657202s
Jan 11 11:34:14.897: INFO: Pod "client-containers-47ff8760-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132082981s
Jan 11 11:34:16.919: INFO: Pod "client-containers-47ff8760-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154880062s
Jan 11 11:34:18.933: INFO: Pod "client-containers-47ff8760-3466-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.167963278s
STEP: Saw pod success
Jan 11 11:34:18.933: INFO: Pod "client-containers-47ff8760-3466-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:34:18.935: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-47ff8760-3466-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 11:34:19.050: INFO: Waiting for pod client-containers-47ff8760-3466-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:34:19.137: INFO: Pod client-containers-47ff8760-3466-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:34:19.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-wpk6l" for this suite.
Jan 11 11:34:25.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:34:25.290: INFO: namespace: e2e-tests-containers-wpk6l, resource: bindings, ignored listing per whitelist
Jan 11 11:34:25.390: INFO: namespace e2e-tests-containers-wpk6l deletion completed in 6.234994984s

• [SLOW TEST:16.970 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
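
The override being verified above corresponds to setting .Command on the container, which replaces the image's ENTRYPOINT (setting .Args would replace its CMD). A minimal sketch, with illustrative image and arguments:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithCommandOverride runs the container with an explicit Command,
// bypassing whatever entrypoint the image declares.
func podWithCommandOverride() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override"},
		Spec: corev1.PodSpec{
			// Never restart so the pod can reach the Succeeded phase,
			// as seen in the log above.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative image
				// Overrides the image's default entrypoint entirely.
				Command: []string{"/bin/echo", "command", "override"},
			}},
		},
	}
}
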
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:34:25.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-520a9df5-3466-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 11:34:25.749: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-l6k75" to be "success or failure"
Jan 11 11:34:25.761: INFO: Pod "pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.866668ms
Jan 11 11:34:27.778: INFO: Pod "pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028762403s
Jan 11 11:34:29.798: INFO: Pod "pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048705371s
Jan 11 11:34:32.088: INFO: Pod "pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338647309s
Jan 11 11:34:34.111: INFO: Pod "pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.361319863s
Jan 11 11:34:36.416: INFO: Pod "pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.666246305s
STEP: Saw pod success
Jan 11 11:34:36.416: INFO: Pod "pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:34:36.432: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 11 11:34:36.717: INFO: Waiting for pod pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:34:36.767: INFO: Pod pod-projected-configmaps-521eae61-3466-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:34:36.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l6k75" for this suite.
Jan 11 11:34:42.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:34:42.903: INFO: namespace: e2e-tests-projected-l6k75, resource: bindings, ignored listing per whitelist
Jan 11 11:34:42.940: INFO: namespace e2e-tests-projected-l6k75 deletion completed in 6.156237802s

• [SLOW TEST:17.550 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
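
A sketch of a projected configMap volume with an explicit defaultMode, the shape this test consumes; the 0400 mode, names and mount path are illustrative.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithProjectedConfigMap mounts a configMap through a projected volume
// and applies a default file mode to every projected item.
func podWithProjectedConfigMap(configMapName string) *corev1.Pod {
	mode := int32(0400) // applied to every projected file unless overridden per item
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "ls -l /etc/projected"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
}
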
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:34:42.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7t2sr
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-7t2sr
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-7t2sr
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-7t2sr
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-7t2sr
Jan 11 11:34:57.575: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-7t2sr, name: ss-0, uid: 65111e3e-3466-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 11 11:34:57.962: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-7t2sr, name: ss-0, uid: 65111e3e-3466-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 11 11:34:58.036: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-7t2sr, name: ss-0, uid: 65111e3e-3466-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 11 11:34:58.048: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-7t2sr
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-7t2sr
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-7t2sr and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 11 11:35:11.118: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7t2sr
Jan 11 11:35:11.221: INFO: Scaling statefulset ss to 0
Jan 11 11:35:21.553: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 11:35:21.560: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:35:21.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7t2sr" for this suite.
Jan 11 11:35:29.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:35:29.856: INFO: namespace: e2e-tests-statefulset-7t2sr, resource: bindings, ignored listing per whitelist
Jan 11 11:35:29.919: INFO: namespace e2e-tests-statefulset-7t2sr deletion completed in 8.20116282s

• [SLOW TEST:46.979 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
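
The scenario above can be sketched as a one-replica StatefulSet whose pod template asks for a hostPort already held by another pod on the node; the stateful pod keeps failing and being recreated by the controller, and only runs once the conflicting pod is removed. Port number, labels and image below are illustrative.

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// statefulSetWithHostPort builds a one-replica StatefulSet whose pod
// requests a fixed hostPort, the deliberate scheduling conflict in the test.
func statefulSetWithHostPort(ns string, hostPort int32) *appsv1.StatefulSet {
	replicas := int32(1)
	labels := map[string]string{"app": "ss"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: ns},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless service created separately, as in the log
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx:1.14-alpine", // illustrative image
						Ports: []corev1.ContainerPort{{
							ContainerPort: hostPort,
							HostPort:      hostPort, // conflicts with the pre-created pod
						}},
					}},
				},
			},
		},
	}
}
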
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:35:29.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 11:35:30.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-t6m8h" to be "success or failure"
Jan 11 11:35:30.223: INFO: Pod "downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.472702ms
Jan 11 11:35:32.565: INFO: Pod "downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.352351223s
Jan 11 11:35:34.577: INFO: Pod "downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364586043s
Jan 11 11:35:36.930: INFO: Pod "downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.717102526s
Jan 11 11:35:38.945: INFO: Pod "downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73252882s
Jan 11 11:35:41.236: INFO: Pod "downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.023380663s
STEP: Saw pod success
Jan 11 11:35:41.236: INFO: Pod "downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:35:41.251: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 11:35:41.877: INFO: Waiting for pod downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:35:41.898: INFO: Pod downwardapi-volume-78848f71-3466-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:35:41.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t6m8h" for this suite.
Jan 11 11:35:48.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:35:48.097: INFO: namespace: e2e-tests-downward-api-t6m8h, resource: bindings, ignored listing per whitelist
Jan 11 11:35:48.206: INFO: namespace e2e-tests-downward-api-t6m8h deletion completed in 6.277849771s

• [SLOW TEST:18.286 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
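
The behaviour checked here comes from a downward API volume file with a resourceFieldRef on limits.memory: when the container declares no memory limit, the kubelet reports the node's allocatable memory instead. A sketch with illustrative names and paths:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithMemoryLimitFile exposes the container's memory limit as a file.
func podWithMemoryLimitFile() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-memlimit"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				// Deliberately no resources.limits.memory: the file above then
				// reports node allocatable memory, which is what the test checks.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}
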
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:35:48.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:35:48.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7jd4w" for this suite.
Jan 11 11:36:12.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:36:12.961: INFO: namespace: e2e-tests-pods-7jd4w, resource: bindings, ignored listing per whitelist
Jan 11 11:36:13.048: INFO: namespace e2e-tests-pods-7jd4w deletion completed in 24.273398736s

• [SLOW TEST:24.842 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
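
The QoS classification verified above follows from the pod's resource spec: when every container's requests equal its limits, the pod is classed Guaranteed, and the class is readable from status.qosClass after submission. A sketch with illustrative quantities:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// guaranteedQoSPod sets requests equal to limits so the pod is classified
// as Guaranteed by the API server.
func guaranteedQoSPod() *corev1.Pod {
	limits := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-qos-class"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox", // illustrative image
				Resources: corev1.ResourceRequirements{
					Requests: limits, // requests == limits => Guaranteed
					Limits:   limits,
				},
			}},
		},
	}
}

// After creation, pod.Status.QOSClass is populated; for the spec above it
// should equal corev1.PodQOSGuaranteed, which is what the test verifies.
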
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:36:13.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 11:36:13.349: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-w4kvn" to be "success or failure"
Jan 11 11:36:13.376: INFO: Pod "downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.080528ms
Jan 11 11:36:15.398: INFO: Pod "downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049176558s
Jan 11 11:36:17.457: INFO: Pod "downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108317429s
Jan 11 11:36:20.053: INFO: Pod "downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704364666s
Jan 11 11:36:22.082: INFO: Pod "downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.733096172s
Jan 11 11:36:24.101: INFO: Pod "downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.75260384s
Jan 11 11:36:26.204: INFO: Pod "downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.855415246s
STEP: Saw pod success
Jan 11 11:36:26.204: INFO: Pod "downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:36:26.215: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 11:36:26.786: INFO: Waiting for pod downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:36:26.797: INFO: Pod downwardapi-volume-9242b965-3466-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:36:26.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-w4kvn" for this suite.
Jan 11 11:36:32.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:36:32.992: INFO: namespace: e2e-tests-downward-api-w4kvn, resource: bindings, ignored listing per whitelist
Jan 11 11:36:33.087: INFO: namespace e2e-tests-downward-api-w4kvn deletion completed in 6.278132299s

• [SLOW TEST:20.039 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
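
A sketch of the downward API volume that exposes only the pod's own name via fieldRef metadata.name, as consumed above; file path and image are illustrative.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithPodnameFile projects only the pod's name into a file.
func podWithPodnameFile() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-podname"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative image
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}
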
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:36:33.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 11 11:36:33.421: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-swpps,SelfLink:/api/v1/namespaces/e2e-tests-watch-swpps/configmaps/e2e-watch-test-resource-version,UID:9e1cd4b6-3466-11ea-a994-fa163e34d433,ResourceVersion:17918968,Generation:0,CreationTimestamp:2020-01-11 11:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 11 11:36:33.421: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-swpps,SelfLink:/api/v1/namespaces/e2e-tests-watch-swpps/configmaps/e2e-watch-test-resource-version,UID:9e1cd4b6-3466-11ea-a994-fa163e34d433,ResourceVersion:17918969,Generation:0,CreationTimestamp:2020-01-11 11:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:36:33.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-swpps" for this suite.
Jan 11 11:36:39.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:36:39.576: INFO: namespace: e2e-tests-watch-swpps, resource: bindings, ignored listing per whitelist
Jan 11 11:36:39.620: INFO: namespace e2e-tests-watch-swpps deletion completed in 6.19057663s

• [SLOW TEST:6.533 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
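
The watch semantics exercised above, starting from a specific resourceVersion so only later events are delivered, can be sketched with client-go as below. The context-free signature matches the client-go releases contemporary to this log, and the label selector mirrors the one visible in the events above; the handling itself is illustrative.

package example

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMapsFrom replays only the events that occurred after the given
// resourceVersion, which is how the test observes the second modification and
// the deletion but not the first update.
func watchConfigMapsFrom(cs kubernetes.Interface, ns, resourceVersion string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		ResourceVersion: resourceVersion,
		LabelSelector:   "watch-this-configmap=from-resource-version",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		fmt.Printf("Got : %v %T\n", event.Type, event.Object)
	}
	return nil
}
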
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:36:39.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 11:36:39.798: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-99pzk" to be "success or failure"
Jan 11 11:36:39.832: INFO: Pod "downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.435165ms
Jan 11 11:36:41.856: INFO: Pod "downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057797229s
Jan 11 11:36:43.917: INFO: Pod "downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11910516s
Jan 11 11:36:46.145: INFO: Pod "downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.347470117s
Jan 11 11:36:48.168: INFO: Pod "downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.370414482s
Jan 11 11:36:50.188: INFO: Pod "downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.390566098s
Jan 11 11:36:52.256: INFO: Pod "downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.458307941s
STEP: Saw pod success
Jan 11 11:36:52.256: INFO: Pod "downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:36:52.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 11:36:52.691: INFO: Waiting for pod downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:36:52.697: INFO: Pod downwardapi-volume-a206302d-3466-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:36:52.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-99pzk" for this suite.
Jan 11 11:36:58.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:36:58.936: INFO: namespace: e2e-tests-projected-99pzk, resource: bindings, ignored listing per whitelist
Jan 11 11:36:59.028: INFO: namespace e2e-tests-projected-99pzk deletion completed in 6.321160021s

• [SLOW TEST:19.407 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
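
A sketch of the projected downward API variant used here: a volume projection whose item carries a resourceFieldRef on limits.cpu, so the mounted file reflects the container's CPU limit. The 500m limit, names and paths are illustrative.

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithProjectedCPULimit exposes the container's CPU limit through a
// projected downward API volume.
func podWithProjectedCPULimit() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpulimit"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}
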
SSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:36:59.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:36:59.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-fdhp9" for this suite.
Jan 11 11:37:05.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:37:05.440: INFO: namespace: e2e-tests-services-fdhp9, resource: bindings, ignored listing per whitelist
Jan 11 11:37:05.440: INFO: namespace e2e-tests-services-fdhp9 deletion completed in 6.143290352s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.412 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
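
What this test asserts is that the built-in "kubernetes" Service in the default namespace exposes a secure (443) port. A sketch of that check, written against the context-free client-go signatures contemporary to this log:

package example

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// masterServiceIsSecure reports whether the cluster's "kubernetes" service
// exposes port 443, the property the conformance test above relies on.
func masterServiceIsSecure(cs kubernetes.Interface) (bool, error) {
	svc, err := cs.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, p := range svc.Spec.Ports {
		if p.Port == 443 {
			return true, nil
		}
	}
	return false, fmt.Errorf("service %q has no port 443", svc.Name)
}
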
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:37:05.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 11:37:05.641: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 11 11:37:10.655: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 11 11:37:16.706: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 11 11:37:18.720: INFO: Creating deployment "test-rollover-deployment"
Jan 11 11:37:18.745: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 11 11:37:20.755: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 11 11:37:20.762: INFO: Ensure that both replica sets have 1 created replica
Jan 11 11:37:20.767: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 11 11:37:20.778: INFO: Updating deployment test-rollover-deployment
Jan 11 11:37:20.778: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 11 11:37:24.208: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 11 11:37:24.245: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 11 11:37:24.264: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 11:37:24.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339442, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 11:37:26.327: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 11:37:26.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339442, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 11:37:29.930: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 11:37:29.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339442, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 11:37:30.414: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 11:37:30.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339442, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 11:37:32.382: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 11:37:32.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339442, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 11:37:34.282: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 11:37:34.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339453, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 11:37:36.304: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 11:37:36.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339453, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 11:37:38.286: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 11:37:38.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339453, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 11:37:40.283: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 11:37:40.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339453, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 11:37:42.321: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 11:37:42.321: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339453, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714339438, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 11:37:44.283: INFO: 
Jan 11 11:37:44.283: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 11 11:37:44.304: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-qfr9v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qfr9v/deployments/test-rollover-deployment,UID:b93ce79a-3466-11ea-a994-fa163e34d433,ResourceVersion:17919158,Generation:2,CreationTimestamp:2020-01-11 11:37:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-11 11:37:18 +0000 UTC 2020-01-11 11:37:18 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-11 11:37:44 +0000 UTC 2020-01-11 11:37:18 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 11 11:37:44.322: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-qfr9v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qfr9v/replicasets/test-rollover-deployment-5b8479fdb6,UID:ba772f0d-3466-11ea-a994-fa163e34d433,ResourceVersion:17919149,Generation:2,CreationTimestamp:2020-01-11 11:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b93ce79a-3466-11ea-a994-fa163e34d433 0xc001381f27 0xc001381f28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 11 11:37:44.322: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 11 11:37:44.322: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-qfr9v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qfr9v/replicasets/test-rollover-controller,UID:b16744f6-3466-11ea-a994-fa163e34d433,ResourceVersion:17919157,Generation:2,CreationTimestamp:2020-01-11 11:37:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b93ce79a-3466-11ea-a994-fa163e34d433 0xc001381d97 0xc001381d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 11 11:37:44.323: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-qfr9v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qfr9v/replicasets/test-rollover-deployment-58494b7559,UID:b94433b8-3466-11ea-a994-fa163e34d433,ResourceVersion:17919113,Generation:2,CreationTimestamp:2020-01-11 11:37:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b93ce79a-3466-11ea-a994-fa163e34d433 0xc001381e57 0xc001381e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 11 11:37:44.413: INFO: Pod "test-rollover-deployment-5b8479fdb6-qlkt6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-qlkt6,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-qfr9v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qfr9v/pods/test-rollover-deployment-5b8479fdb6-qlkt6,UID:bb131ef2-3466-11ea-a994-fa163e34d433,ResourceVersion:17919133,Generation:0,CreationTimestamp:2020-01-11 11:37:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 ba772f0d-3466-11ea-a994-fa163e34d433 0xc000970a17 0xc000970a18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-tr28x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tr28x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-tr28x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000970b10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000970c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:37:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:37:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:37:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 11:37:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-11 11:37:22 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-11 11:37:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://eac09bc233dfa5ff2861fe0d813d4a6d8bb36260f8f35d5d6218daec989e11ed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:37:44.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-qfr9v" for this suite.
Jan 11 11:37:52.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:37:52.559: INFO: namespace: e2e-tests-deployment-qfr9v, resource: bindings, ignored listing per whitelist
Jan 11 11:37:52.645: INFO: namespace e2e-tests-deployment-qfr9v deletion completed in 8.225446352s

• [SLOW TEST:47.205 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
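The rollover run above creates a Deployment with minReadySeconds of 10, swaps in a new pod template mid-rollout, and then checks that the superseded ReplicaSet (test-rollover-deployment-58494b7559) ends up scaled to zero while the new one serves the single replica. A minimal sketch of a comparable Deployment object, built against the v1.13-era k8s.io/api types this suite uses; the deployment name, the rollover-pod label and the redis image come from the log, the rest is illustrative:

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rolloverDeployment builds a Deployment comparable to the one exercised above:
// one replica, a 10s minReadySeconds window and a RollingUpdate strategy, so that
// replacing the pod template mid-rollout produces a new ReplicaSet while the
// superseded one is scaled down to zero.
func rolloverDeployment() *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"name": "rollover-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10,
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}
```

The desired-replicas and max-replicas annotations visible in the ReplicaSet dump above are written by the deployment controller itself and are not set by the caller.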
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:37:52.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jdtqt
Jan 11 11:38:02.280: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jdtqt
STEP: checking the pod's current state and verifying that restartCount is present
Jan 11 11:38:02.284: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:42:02.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jdtqt" for this suite.
Jan 11 11:42:09.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:42:09.137: INFO: namespace: e2e-tests-container-probe-jdtqt, resource: bindings, ignored listing per whitelist
Jan 11 11:42:09.168: INFO: namespace e2e-tests-container-probe-jdtqt deletion completed in 6.272884892s

• [SLOW TEST:256.522 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
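The probe case above creates pod liveness-http and then watches it for four minutes to confirm restartCount never moves from 0, i.e. the /healthz HTTP liveness probe keeps succeeding. A sketch of a pod carrying such a probe; the image, port and timings are assumptions rather than values from the log, and Handler is the v1.13-era field name (ProbeHandler in newer API versions):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessHTTPPod sketches a pod whose container keeps answering /healthz, so the
// kubelet's HTTP liveness probe never fails and restartCount stays at 0.
func livenessHTTPPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.0", // assumed image
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // v1.13-era field; ProbeHandler in newer releases
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080), // assumed port
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    3,
				},
			}},
		},
	}
}
```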
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:42:09.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-6678061d-3467-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 11:42:09.382: INFO: Waiting up to 5m0s for pod "pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005" in namespace "e2e-tests-secrets-bc424" to be "success or failure"
Jan 11 11:42:09.397: INFO: Pod "pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.560818ms
Jan 11 11:42:11.491: INFO: Pod "pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109412114s
Jan 11 11:42:13.523: INFO: Pod "pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141054801s
Jan 11 11:42:15.677: INFO: Pod "pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295183656s
Jan 11 11:42:17.871: INFO: Pod "pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.488799362s
Jan 11 11:42:19.965: INFO: Pod "pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.58256973s
STEP: Saw pod success
Jan 11 11:42:19.965: INFO: Pod "pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:42:19.971: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 11 11:42:20.304: INFO: Waiting for pod pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:42:20.310: INFO: Pod pod-secrets-6678d65c-3467-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:42:20.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bc424" for this suite.
Jan 11 11:42:26.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:42:26.579: INFO: namespace: e2e-tests-secrets-bc424, resource: bindings, ignored listing per whitelist
Jan 11 11:42:26.766: INFO: namespace e2e-tests-secrets-bc424 deletion completed in 6.399619902s

• [SLOW TEST:17.598 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
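The Secrets case above mounts a secret into the secret-volume-test container with a key-to-path mapping and an explicit per-item mode (the "Item Mode" in the test name), then reads the projected file back and expects the pod to reach Succeeded. A sketch of the pod shape under assumed names; the key, path, mode and busybox image are illustrative stand-ins for the test's own values:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePod mounts one secret key at a remapped path with an explicit
// file mode (0400 here), then prints the file so the framework can verify it.
func secretVolumePod(secretName string) *corev1.Pod {
	mode := int32(0400) // the per-item mode the test name refers to
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secretName,
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // assumed key
							Path: "new-path-data-1", // remapped path inside the mount
							Mode: &mode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
}
```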
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:42:26.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 11 11:42:26.905: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:42:49.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-79nk5" for this suite.
Jan 11 11:43:30.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:43:30.139: INFO: namespace: e2e-tests-init-container-79nk5, resource: bindings, ignored listing per whitelist
Jan 11 11:43:30.292: INFO: namespace e2e-tests-init-container-79nk5 deletion completed in 40.264377764s

• [SLOW TEST:63.526 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
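The InitContainer case above only logs "PodSpec: initContainers in spec.initContainers", but what it exercises is ordering: every init container must run to completion, in order, before the regular containers of a RestartAlways pod start. A minimal sketch of such a pod, with busybox used as an assumed image:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod sketches a RestartAlways pod with two init containers that
// must both exit 0 before the long-running main container is started.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "sleep 3600"},
			}},
		},
	}
}
```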
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:43:30.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 11 11:43:30.763: INFO: Waiting up to 5m0s for pod "pod-96ee2b7b-3467-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-zspxw" to be "success or failure"
Jan 11 11:43:30.795: INFO: Pod "pod-96ee2b7b-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.131691ms
Jan 11 11:43:32.805: INFO: Pod "pod-96ee2b7b-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041928791s
Jan 11 11:43:34.828: INFO: Pod "pod-96ee2b7b-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064268396s
Jan 11 11:43:37.322: INFO: Pod "pod-96ee2b7b-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.55842399s
Jan 11 11:43:39.338: INFO: Pod "pod-96ee2b7b-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.574182086s
Jan 11 11:43:41.457: INFO: Pod "pod-96ee2b7b-3467-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.693143034s
STEP: Saw pod success
Jan 11 11:43:41.457: INFO: Pod "pod-96ee2b7b-3467-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:43:41.468: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-96ee2b7b-3467-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 11:43:41.988: INFO: Waiting for pod pod-96ee2b7b-3467-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:43:42.000: INFO: Pod pod-96ee2b7b-3467-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:43:42.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zspxw" for this suite.
Jan 11 11:43:48.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:43:48.400: INFO: namespace: e2e-tests-emptydir-zspxw, resource: bindings, ignored listing per whitelist
Jan 11 11:43:48.415: INFO: namespace e2e-tests-emptydir-zspxw deletion completed in 6.372193777s

• [SLOW TEST:18.122 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
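The emptyDir case above, "(root,0666,default)", creates a pod with an emptyDir volume on the default medium, writes a file with mode 0666 as root, and checks the result through the pod logs before the pod reaches Succeeded. A sketch of the same shape, with busybox standing in for the suite's mounttest image:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod sketches the (root,0666,default) scenario: an emptyDir volume on
// the default medium, a file created with mode 0666, and the permissions
// printed so the framework can check them in the container logs.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "touch /test-volume/test-file && chmod 0666 /test-volume/test-file && ls -l /test-volume/test-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}
```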
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:43:48.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 11 11:43:57.558: INFO: Successfully updated pod "annotationupdatea1b1dff3-3467-11ea-b0bd-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:43:59.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-frxzn" for this suite.
Jan 11 11:44:23.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:44:24.028: INFO: namespace: e2e-tests-downward-api-frxzn, resource: bindings, ignored listing per whitelist
Jan 11 11:44:24.036: INFO: namespace e2e-tests-downward-api-frxzn deletion completed in 24.233119967s

• [SLOW TEST:35.621 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
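The Downward API case above creates a pod that projects its own metadata into a file, mutates the pod's annotations, and waits for the kubelet to rewrite the projected file ("Successfully updated pod annotationupdate..."). A sketch of the projection side under assumed names; the initial annotation, image and command are illustrative:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIAnnotationsPod projects metadata.annotations into a file via a
// downward API volume; when the annotations are modified, the kubelet updates
// the file, which is what the test polls for in the container output.
func downwardAPIAnnotationsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",
			Annotations: map[string]string{"builder": "bar"}, // assumed initial annotation
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
}
```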
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:44:24.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 11 11:44:24.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:26.338: INFO: stderr: ""
Jan 11 11:44:26.339: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 11 11:44:26.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:26.662: INFO: stderr: ""
Jan 11 11:44:26.663: INFO: stdout: "update-demo-nautilus-h5dnl update-demo-nautilus-kbzmz "
Jan 11 11:44:26.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h5dnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:26.838: INFO: stderr: ""
Jan 11 11:44:26.838: INFO: stdout: ""
Jan 11 11:44:26.838: INFO: update-demo-nautilus-h5dnl is created but not running
Jan 11 11:44:31.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:31.974: INFO: stderr: ""
Jan 11 11:44:31.975: INFO: stdout: "update-demo-nautilus-h5dnl update-demo-nautilus-kbzmz "
Jan 11 11:44:31.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h5dnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:32.097: INFO: stderr: ""
Jan 11 11:44:32.097: INFO: stdout: ""
Jan 11 11:44:32.097: INFO: update-demo-nautilus-h5dnl is created but not running
Jan 11 11:44:37.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:37.252: INFO: stderr: ""
Jan 11 11:44:37.252: INFO: stdout: "update-demo-nautilus-h5dnl update-demo-nautilus-kbzmz "
Jan 11 11:44:37.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h5dnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:37.394: INFO: stderr: ""
Jan 11 11:44:37.394: INFO: stdout: ""
Jan 11 11:44:37.394: INFO: update-demo-nautilus-h5dnl is created but not running
Jan 11 11:44:42.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:42.585: INFO: stderr: ""
Jan 11 11:44:42.585: INFO: stdout: "update-demo-nautilus-h5dnl update-demo-nautilus-kbzmz "
Jan 11 11:44:42.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h5dnl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:42.727: INFO: stderr: ""
Jan 11 11:44:42.727: INFO: stdout: "true"
Jan 11 11:44:42.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h5dnl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:42.837: INFO: stderr: ""
Jan 11 11:44:42.838: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 11 11:44:42.838: INFO: validating pod update-demo-nautilus-h5dnl
Jan 11 11:44:42.873: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 11 11:44:42.873: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 11 11:44:42.873: INFO: update-demo-nautilus-h5dnl is verified up and running
Jan 11 11:44:42.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbzmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:43.039: INFO: stderr: ""
Jan 11 11:44:43.039: INFO: stdout: "true"
Jan 11 11:44:43.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbzmz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:43.213: INFO: stderr: ""
Jan 11 11:44:43.213: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 11 11:44:43.213: INFO: validating pod update-demo-nautilus-kbzmz
Jan 11 11:44:43.231: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 11 11:44:43.231: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 11 11:44:43.231: INFO: update-demo-nautilus-kbzmz is verified up and running
STEP: using delete to clean up resources
Jan 11 11:44:43.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:43.339: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 11:44:43.339: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 11 11:44:43.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-9hxhc'
Jan 11 11:44:43.481: INFO: stderr: "No resources found.\n"
Jan 11 11:44:43.481: INFO: stdout: ""
Jan 11 11:44:43.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-9hxhc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 11 11:44:43.638: INFO: stderr: ""
Jan 11 11:44:43.639: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:44:43.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9hxhc" for this suite.
Jan 11 11:45:07.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:45:07.835: INFO: namespace: e2e-tests-kubectl-9hxhc, resource: bindings, ignored listing per whitelist
Jan 11 11:45:07.881: INFO: namespace e2e-tests-kubectl-9hxhc deletion completed in 24.225478778s

• [SLOW TEST:43.845 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
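The Update Demo case above drives everything through kubectl: create -f, repeated go-template gets until both nautilus pods report a running update-demo container, then delete --grace-period=0 --force. The object behind those commands is a plain ReplicationController selecting name=update-demo; a sketch of an equivalent object in Go, with the replica count assumed from the two pods seen in the log:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// updateDemoRC sketches the ReplicationController behind the Update Demo case:
// two nautilus replicas selected by name=update-demo, which the test verifies
// pod by pod and finally force-deletes via kubectl.
func updateDemoRC() *corev1.ReplicationController {
	replicas := int32(2)
	labels := map[string]string{"name": "update-demo"}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "update-demo",
						Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
}
```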
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:45:07.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 11 11:45:08.071: INFO: Waiting up to 5m0s for pod "pod-d0f8839f-3467-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-mwqf9" to be "success or failure"
Jan 11 11:45:08.088: INFO: Pod "pod-d0f8839f-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.924865ms
Jan 11 11:45:10.232: INFO: Pod "pod-d0f8839f-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160527517s
Jan 11 11:45:12.250: INFO: Pod "pod-d0f8839f-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178079541s
Jan 11 11:45:14.328: INFO: Pod "pod-d0f8839f-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256974795s
Jan 11 11:45:16.687: INFO: Pod "pod-d0f8839f-3467-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.61586957s
Jan 11 11:45:18.705: INFO: Pod "pod-d0f8839f-3467-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.633394223s
STEP: Saw pod success
Jan 11 11:45:18.705: INFO: Pod "pod-d0f8839f-3467-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:45:18.722: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d0f8839f-3467-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 11:45:19.065: INFO: Waiting for pod pod-d0f8839f-3467-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:45:19.076: INFO: Pod pod-d0f8839f-3467-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:45:19.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mwqf9" for this suite.
Jan 11 11:45:25.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:45:25.317: INFO: namespace: e2e-tests-emptydir-mwqf9, resource: bindings, ignored listing per whitelist
Jan 11 11:45:25.398: INFO: namespace e2e-tests-emptydir-mwqf9 deletion completed in 6.304646278s

• [SLOW TEST:17.517 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:45:25.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 11 11:45:35.764: INFO: Pod pod-hostip-db6e4e38-3467-11ea-b0bd-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:45:35.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5j9p7" for this suite.
Jan 11 11:45:59.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:45:59.888: INFO: namespace: e2e-tests-pods-5j9p7, resource: bindings, ignored listing per whitelist
Jan 11 11:45:59.984: INFO: namespace e2e-tests-pods-5j9p7 deletion completed in 24.213192206s

• [SLOW TEST:34.586 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
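The host IP case above is small: it creates a pod and polls it until status.hostIP is populated, which here resolves to the node address 10.96.1.240. A sketch of the read side using the v1.13-era client-go signatures (no context argument, as in the client libraries this suite was built with); the clientset wiring and names are assumptions:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printHostIP fetches a pod and reports status.hostIP, the field the
// "should get a host IP" case waits to see populated. The Get call shown takes
// no context.Context, matching v1.13-era client-go; newer releases add one as
// the first argument.
func printHostIP(clientset kubernetes.Interface, namespace, name string) error {
	pod, err := clientset.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("Pod %s has hostIP: %s\n", pod.Name, pod.Status.HostIP)
	return nil
}
```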
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:45:59.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 11 11:46:18.775: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:18.783: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:20.783: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:20.852: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:22.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:22.798: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:24.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:24.801: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:26.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:26.816: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:28.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:28.814: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:30.783: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:30.804: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:32.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:32.803: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:34.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:34.810: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:36.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:37.359: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:38.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:38.800: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:40.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:40.802: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:42.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:42.803: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:44.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:44.800: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:46.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:46.796: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:48.783: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:48.796: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:50.783: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:50.818: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 11 11:46:52.784: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 11 11:46:52.790: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:46:52.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-c4wmv" for this suite.
Jan 11 11:47:16.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:47:17.404: INFO: namespace: e2e-tests-container-lifecycle-hook-c4wmv, resource: bindings, ignored listing per whitelist
Jan 11 11:47:17.404: INFO: namespace e2e-tests-container-lifecycle-hook-c4wmv deletion completed in 24.541268872s

• [SLOW TEST:77.421 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
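The lifecycle-hook case above first starts an HTTP-serving helper pod, then creates pod-with-prestop-exec-hook, deletes it, and polls until it disappears; the "check prestop hook" step confirms the exec PreStop handler reported back to the helper before the container was torn down. A sketch of the hooked pod; the busybox image and the wget target are assumptions standing in for the test's helper, and Handler is the v1.13-era type name (LifecycleHandler in newer APIs):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopHookPod runs an exec PreStop handler when the pod is deleted; the
// handler phones home to a helper service so the test can verify it fired.
func preStopHookPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{ // v1.13-era type; LifecycleHandler in newer releases
						Exec: &corev1.ExecAction{
							// assumed endpoint standing in for the test's HTTP-serving helper pod
							Command: []string{"sh", "-c", "wget -qO- http://handler-pod:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
}
```

The long tail of "still exists" lines above is simply the deletion poll running while the kubelet executes the hook and terminates the container.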
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:47:17.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-gsvk
STEP: Creating a pod to test atomic-volume-subpath
Jan 11 11:47:18.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gsvk" in namespace "e2e-tests-subpath-wmrbj" to be "success or failure"
Jan 11 11:47:18.432: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Pending", Reason="", readiness=false. Elapsed: 212.323971ms
Jan 11 11:47:20.499: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2784013s
Jan 11 11:47:22.564: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343472493s
Jan 11 11:47:24.803: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582670368s
Jan 11 11:47:26.983: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.763084167s
Jan 11 11:47:29.026: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.806279534s
Jan 11 11:47:31.039: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.818818567s
Jan 11 11:47:33.046: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.82540471s
Jan 11 11:47:35.064: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Running", Reason="", readiness=false. Elapsed: 16.843500292s
Jan 11 11:47:37.082: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Running", Reason="", readiness=false. Elapsed: 18.861636183s
Jan 11 11:47:39.102: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Running", Reason="", readiness=false. Elapsed: 20.881653169s
Jan 11 11:47:41.120: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Running", Reason="", readiness=false. Elapsed: 22.899365357s
Jan 11 11:47:43.136: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Running", Reason="", readiness=false. Elapsed: 24.915381431s
Jan 11 11:47:45.151: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Running", Reason="", readiness=false. Elapsed: 26.930825443s
Jan 11 11:47:47.163: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Running", Reason="", readiness=false. Elapsed: 28.942570477s
Jan 11 11:47:49.174: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Running", Reason="", readiness=false. Elapsed: 30.954093787s
Jan 11 11:47:51.199: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Running", Reason="", readiness=false. Elapsed: 32.978883874s
Jan 11 11:47:53.667: INFO: Pod "pod-subpath-test-configmap-gsvk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.446684492s
STEP: Saw pod success
Jan 11 11:47:53.667: INFO: Pod "pod-subpath-test-configmap-gsvk" satisfied condition "success or failure"
Jan 11 11:47:53.706: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-gsvk container test-container-subpath-configmap-gsvk: 
STEP: delete the pod
Jan 11 11:47:53.981: INFO: Waiting for pod pod-subpath-test-configmap-gsvk to disappear
Jan 11 11:47:54.052: INFO: Pod pod-subpath-test-configmap-gsvk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-gsvk
Jan 11 11:47:54.052: INFO: Deleting pod "pod-subpath-test-configmap-gsvk" in namespace "e2e-tests-subpath-wmrbj"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:47:54.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-wmrbj" for this suite.
Jan 11 11:48:02.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:48:02.351: INFO: namespace: e2e-tests-subpath-wmrbj, resource: bindings, ignored listing per whitelist
Jan 11 11:48:02.417: INFO: namespace e2e-tests-subpath-wmrbj deletion completed in 8.348624033s

• [SLOW TEST:45.012 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
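The Subpath case above projects a single configmap key over an existing file path inside the container (the "mountPath of existing file" in the test name) and waits through Pending and Running until the pod succeeds. A sketch of the mount shape under assumed names; the configmap key, the /etc/hosts target and the busybox image are illustrative:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subpathConfigMapPod mounts one configmap key via SubPath over an existing
// file path, so only that file is replaced rather than the whole directory.
func subpathConfigMapPod(configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-configmap",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "cat /etc/hosts"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/hosts", // illustrative existing-file mount path
					SubPath:   "data-1",     // assumed configmap key
				}},
			}},
		},
	}
}
```

The later "should support subpaths with secret pod" case in this log is the same pattern with a Secret volume source in place of the ConfigMap.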
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:48:02.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 11 11:48:02.909: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m2n8r,SelfLink:/api/v1/namespaces/e2e-tests-watch-m2n8r/configmaps/e2e-watch-test-watch-closed,UID:392780ed-3468-11ea-a994-fa163e34d433,ResourceVersion:17920242,Generation:0,CreationTimestamp:2020-01-11 11:48:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 11 11:48:02.909: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m2n8r,SelfLink:/api/v1/namespaces/e2e-tests-watch-m2n8r/configmaps/e2e-watch-test-watch-closed,UID:392780ed-3468-11ea-a994-fa163e34d433,ResourceVersion:17920243,Generation:0,CreationTimestamp:2020-01-11 11:48:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 11 11:48:02.947: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m2n8r,SelfLink:/api/v1/namespaces/e2e-tests-watch-m2n8r/configmaps/e2e-watch-test-watch-closed,UID:392780ed-3468-11ea-a994-fa163e34d433,ResourceVersion:17920244,Generation:0,CreationTimestamp:2020-01-11 11:48:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 11 11:48:02.947: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m2n8r,SelfLink:/api/v1/namespaces/e2e-tests-watch-m2n8r/configmaps/e2e-watch-test-watch-closed,UID:392780ed-3468-11ea-a994-fa163e34d433,ResourceVersion:17920245,Generation:0,CreationTimestamp:2020-01-11 11:48:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:48:02.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-m2n8r" for this suite.
Jan 11 11:48:09.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:48:09.095: INFO: namespace: e2e-tests-watch-m2n8r, resource: bindings, ignored listing per whitelist
Jan 11 11:48:09.201: INFO: namespace e2e-tests-watch-m2n8r deletion completed in 6.236189627s

• [SLOW TEST:6.784 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
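The Watchers case above closes a watch after two notifications, mutates and deletes the configmap while no watch is open, then opens a new watch from the last observed resourceVersion and still receives the intermediate MODIFIED and the DELETED events. A sketch of the resume step with the v1.13-era client-go signatures (no context argument); the clientset wiring is an assumption, while the label selector value comes from the configmap dumps above:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// resumeConfigMapWatch re-opens a watch on configmaps starting from the last
// resourceVersion the previous watch observed, so notifications produced while
// no watch was open are replayed rather than lost.
func resumeConfigMapWatch(clientset kubernetes.Interface, namespace, lastRV string) error {
	w, err := clientset.CoreV1().ConfigMaps(namespace).Watch(metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: lastRV, // resume point recorded from the previous watch
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", event.Type, event.Object)
	}
	return nil
}
```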
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:48:09.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-fl2p
STEP: Creating a pod to test atomic-volume-subpath
Jan 11 11:48:09.435: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fl2p" in namespace "e2e-tests-subpath-vt48r" to be "success or failure"
Jan 11 11:48:09.438: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Pending", Reason="", readiness=false. Elapsed: 3.725705ms
Jan 11 11:48:11.460: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025767958s
Jan 11 11:48:13.524: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089339875s
Jan 11 11:48:16.088: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.653260709s
Jan 11 11:48:18.103: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.668304569s
Jan 11 11:48:20.118: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.68299039s
Jan 11 11:48:22.436: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Pending", Reason="", readiness=false. Elapsed: 13.001334635s
Jan 11 11:48:24.650: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Pending", Reason="", readiness=false. Elapsed: 15.215719064s
Jan 11 11:48:26.715: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Running", Reason="", readiness=false. Elapsed: 17.280457216s
Jan 11 11:48:28.724: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Running", Reason="", readiness=false. Elapsed: 19.289312361s
Jan 11 11:48:30.733: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Running", Reason="", readiness=false. Elapsed: 21.298821663s
Jan 11 11:48:32.746: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Running", Reason="", readiness=false. Elapsed: 23.31103427s
Jan 11 11:48:34.763: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Running", Reason="", readiness=false. Elapsed: 25.32814284s
Jan 11 11:48:36.776: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Running", Reason="", readiness=false. Elapsed: 27.341191387s
Jan 11 11:48:38.786: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Running", Reason="", readiness=false. Elapsed: 29.35106258s
Jan 11 11:48:40.806: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Running", Reason="", readiness=false. Elapsed: 31.371677034s
Jan 11 11:48:42.845: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Running", Reason="", readiness=false. Elapsed: 33.410754647s
Jan 11 11:48:44.911: INFO: Pod "pod-subpath-test-secret-fl2p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.47658261s
STEP: Saw pod success
Jan 11 11:48:44.911: INFO: Pod "pod-subpath-test-secret-fl2p" satisfied condition "success or failure"
Jan 11 11:48:44.951: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-fl2p container test-container-subpath-secret-fl2p: 
STEP: delete the pod
Jan 11 11:48:45.077: INFO: Waiting for pod pod-subpath-test-secret-fl2p to disappear
Jan 11 11:48:45.101: INFO: Pod pod-subpath-test-secret-fl2p no longer exists
STEP: Deleting pod pod-subpath-test-secret-fl2p
Jan 11 11:48:45.101: INFO: Deleting pod "pod-subpath-test-secret-fl2p" in namespace "e2e-tests-subpath-vt48r"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:48:45.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vt48r" for this suite.
Jan 11 11:48:51.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:48:51.451: INFO: namespace: e2e-tests-subpath-vt48r, resource: bindings, ignored listing per whitelist
Jan 11 11:48:51.695: INFO: namespace e2e-tests-subpath-vt48r deletion completed in 6.576581059s

• [SLOW TEST:42.494 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:48:51.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 11:48:52.034: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-sjzdn" to be "success or failure"
Jan 11 11:48:52.140: INFO: Pod "downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 104.159298ms
Jan 11 11:48:54.167: INFO: Pod "downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131164958s
Jan 11 11:48:56.194: INFO: Pod "downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157748004s
Jan 11 11:48:58.208: INFO: Pod "downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172002379s
Jan 11 11:49:00.416: INFO: Pod "downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.38049167s
Jan 11 11:49:02.823: INFO: Pod "downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.787376005s
STEP: Saw pod success
Jan 11 11:49:02.823: INFO: Pod "downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:49:02.832: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 11:49:03.241: INFO: Waiting for pod downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:49:03.258: INFO: Pod downwardapi-volume-5678169a-3468-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:49:03.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sjzdn" for this suite.
Jan 11 11:49:09.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:49:09.647: INFO: namespace: e2e-tests-downward-api-sjzdn, resource: bindings, ignored listing per whitelist
Jan 11 11:49:09.669: INFO: namespace e2e-tests-downward-api-sjzdn deletion completed in 6.385470791s

• [SLOW TEST:17.974 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
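The memory-limit case above exposes the container's own limits.memory through a downward API volume file and checks the value printed by the client-container in the pod logs. A sketch of that wiring; the 64Mi limit, image and command are assumptions:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryLimitDownwardPod exposes the container's limits.memory via a downward
// API volume file, which the container prints so the framework can verify it.
func memoryLimitDownwardPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"), // assumed limit
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
}
```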
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:49:09.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 11:49:10.038: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"61181115-3468-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001d6a602), BlockOwnerDeletion:(*bool)(0xc001d6a603)}}
Jan 11 11:49:10.098: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6114da75-3468-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000545ed2), BlockOwnerDeletion:(*bool)(0xc000545ed3)}}
Jan 11 11:49:10.175: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6115f947-3468-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001629ea2), BlockOwnerDeletion:(*bool)(0xc001629ea3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:49:15.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5vsc4" for this suite.
Jan 11 11:49:21.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:49:21.439: INFO: namespace: e2e-tests-gc-5vsc4, resource: bindings, ignored listing per whitelist
Jan 11 11:49:21.501: INFO: namespace e2e-tests-gc-5vsc4 deletion completed in 6.25494684s

• [SLOW TEST:11.831 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
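The garbage-collector case above wires three pods into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, as the OwnerReferences lines show) and verifies that deletion is not blocked by the circle. A sketch of the building block, attaching a single blocking owner reference to a pod; the helper name is illustrative:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// setPodOwner points child's metadata.ownerReferences at another pod with
// blockOwnerDeletion set, the building block the dependency-circle case uses to
// wire pod1 -> pod3, pod2 -> pod1 and pod3 -> pod2 before deleting them.
func setPodOwner(child *corev1.Pod, ownerName string, ownerUID types.UID) {
	truth := true
	child.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               ownerName,
		UID:                ownerUID,
		Controller:         &truth,
		BlockOwnerDeletion: &truth,
	}}
}
```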
SSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:49:21.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-2hlnh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2hlnh to expose endpoints map[]
Jan 11 11:49:21.757: INFO: Get endpoints failed (19.243908ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 11 11:49:22.770: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2hlnh exposes endpoints map[] (1.031467723s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-2hlnh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2hlnh to expose endpoints map[pod1:[80]]
Jan 11 11:49:28.124: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.339776942s elapsed, will retry)
Jan 11 11:49:31.225: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2hlnh exposes endpoints map[pod1:[80]] (8.441371076s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-2hlnh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2hlnh to expose endpoints map[pod2:[80] pod1:[80]]
Jan 11 11:49:35.771: INFO: Unexpected endpoints: found map[68cd98b0-3468-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.535485044s elapsed, will retry)
Jan 11 11:49:40.255: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2hlnh exposes endpoints map[pod1:[80] pod2:[80]] (9.019259875s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-2hlnh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2hlnh to expose endpoints map[pod2:[80]]
Jan 11 11:49:41.308: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2hlnh exposes endpoints map[pod2:[80]] (1.042471206s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-2hlnh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2hlnh to expose endpoints map[]
Jan 11 11:49:42.392: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2hlnh exposes endpoints map[] (1.065431951s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:49:42.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-2hlnh" for this suite.
Jan 11 11:50:06.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:50:06.865: INFO: namespace: e2e-tests-services-2hlnh, resource: bindings, ignored listing per whitelist
Jan 11 11:50:06.880: INFO: namespace e2e-tests-services-2hlnh deletion completed in 24.309607039s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:45.379 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
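Editor's note: the Services test above creates service endpoint-test2 and then pods whose labels match its selector, waiting for the Endpoints object to track them (map[pod1:[80]], then map[pod1:[80] pod2:[80]], and back to map[] after deletion). A minimal sketch of one service plus one backing pod; the selector labels and pause image are assumptions, not values from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"name": "endpoint-test2"} // assumed selector

	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: labels,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				Protocol:   corev1.ProtocolTCP,
			}},
		},
	}

	// pod1 exposes port 80 and carries the service's selector labels, so the
	// endpoints controller adds it to the service's Endpoints object.
	pod1 := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod1", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
			}},
		},
	}

	fmt.Println(svc.Name, "selects", pod1.Name, "on port", svc.Spec.Ports[0].Port)
}
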
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:50:06.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-rh42f/configmap-test-8337ba8e-3468-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 11:50:07.141: INFO: Waiting up to 5m0s for pod "pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005" in namespace "e2e-tests-configmap-rh42f" to be "success or failure"
Jan 11 11:50:07.186: INFO: Pod "pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.47595ms
Jan 11 11:50:09.199: INFO: Pod "pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05879168s
Jan 11 11:50:11.209: INFO: Pod "pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068832913s
Jan 11 11:50:13.279: INFO: Pod "pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138017703s
Jan 11 11:50:15.299: INFO: Pod "pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158126464s
Jan 11 11:50:17.319: INFO: Pod "pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178152112s
STEP: Saw pod success
Jan 11 11:50:17.319: INFO: Pod "pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:50:17.325: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005 container env-test: 
STEP: delete the pod
Jan 11 11:50:17.998: INFO: Waiting for pod pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:50:18.237: INFO: Pod pod-configmaps-8339b248-3468-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:50:18.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rh42f" for this suite.
Jan 11 11:50:24.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:50:24.479: INFO: namespace: e2e-tests-configmap-rh42f, resource: bindings, ignored listing per whitelist
Jan 11 11:50:24.675: INFO: namespace e2e-tests-configmap-rh42f deletion completed in 6.412381101s

• [SLOW TEST:17.796 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
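Editor's note: the ConfigMap test above creates a configmap and a pod whose env-test container imports one of its keys as an environment variable, then inspects the container's output. A minimal sketch of that wiring; the configmap name, key, variable name, and image are illustrative placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // prints CONFIG_DATA_1=value-1
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}

	fmt.Printf("pod %s imports key %q from configmap %s\n", pod.Name, "data-1", cm.Name)
}
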
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:50:24.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-gxk2
STEP: Creating a pod to test atomic-volume-subpath
Jan 11 11:50:24.935: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gxk2" in namespace "e2e-tests-subpath-mnsk5" to be "success or failure"
Jan 11 11:50:24.968: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Pending", Reason="", readiness=false. Elapsed: 33.59669ms
Jan 11 11:50:26.992: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056980142s
Jan 11 11:50:29.013: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078303358s
Jan 11 11:50:31.221: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.286042493s
Jan 11 11:50:33.247: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.312187988s
Jan 11 11:50:35.254: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.319323958s
Jan 11 11:50:37.869: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.934346274s
Jan 11 11:50:39.877: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.942183318s
Jan 11 11:50:41.898: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Running", Reason="", readiness=false. Elapsed: 16.962943943s
Jan 11 11:50:43.920: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Running", Reason="", readiness=false. Elapsed: 18.985405863s
Jan 11 11:50:45.935: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Running", Reason="", readiness=false. Elapsed: 21.000587752s
Jan 11 11:50:47.961: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Running", Reason="", readiness=false. Elapsed: 23.026298834s
Jan 11 11:50:49.983: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Running", Reason="", readiness=false. Elapsed: 25.048813673s
Jan 11 11:50:52.040: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Running", Reason="", readiness=false. Elapsed: 27.105679447s
Jan 11 11:50:54.061: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Running", Reason="", readiness=false. Elapsed: 29.125883963s
Jan 11 11:50:56.072: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Running", Reason="", readiness=false. Elapsed: 31.137735479s
Jan 11 11:50:58.091: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Running", Reason="", readiness=false. Elapsed: 33.156366909s
Jan 11 11:51:00.111: INFO: Pod "pod-subpath-test-downwardapi-gxk2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.176197264s
STEP: Saw pod success
Jan 11 11:51:00.111: INFO: Pod "pod-subpath-test-downwardapi-gxk2" satisfied condition "success or failure"
Jan 11 11:51:00.131: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-gxk2 container test-container-subpath-downwardapi-gxk2: 
STEP: delete the pod
Jan 11 11:51:00.368: INFO: Waiting for pod pod-subpath-test-downwardapi-gxk2 to disappear
Jan 11 11:51:00.405: INFO: Pod pod-subpath-test-downwardapi-gxk2 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-gxk2
Jan 11 11:51:00.405: INFO: Deleting pod "pod-subpath-test-downwardapi-gxk2" in namespace "e2e-tests-subpath-mnsk5"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:51:00.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mnsk5" for this suite.
Jan 11 11:51:06.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:51:06.741: INFO: namespace: e2e-tests-subpath-mnsk5, resource: bindings, ignored listing per whitelist
Jan 11 11:51:06.768: INFO: namespace e2e-tests-subpath-mnsk5 deletion completed in 6.335032051s

• [SLOW TEST:42.093 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
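Editor's note: the subpath test above mounts a downward-API volume and has the container read a single file from it through a subPath mount. A minimal sketch with an assumed label, file path, and image; only the general shape (downward API volume + subPath volume mount) matches what the test exercises.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-subpath-test-downwardapi",
			Labels: map[string]string{"podname": "pod-subpath-test-downwardapi"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname", // file the kubelet writes into the volume
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.labels['podname']",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /mnt/result"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward",
					MountPath: "/mnt/result",
					SubPath:   "podname", // mount just this one file from the volume
				}},
			}},
		},
	}

	fmt.Println("subPath mount:", pod.Spec.Containers[0].VolumeMounts[0].SubPath)
}
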
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:51:06.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 11 11:51:06.956: INFO: Waiting up to 5m0s for pod "pod-a6e44251-3468-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-5g665" to be "success or failure"
Jan 11 11:51:06.993: INFO: Pod "pod-a6e44251-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.781194ms
Jan 11 11:51:09.016: INFO: Pod "pod-a6e44251-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060342691s
Jan 11 11:51:11.046: INFO: Pod "pod-a6e44251-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090516094s
Jan 11 11:51:13.073: INFO: Pod "pod-a6e44251-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117666446s
Jan 11 11:51:15.130: INFO: Pod "pod-a6e44251-3468-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174162057s
Jan 11 11:51:17.148: INFO: Pod "pod-a6e44251-3468-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.192067134s
STEP: Saw pod success
Jan 11 11:51:17.148: INFO: Pod "pod-a6e44251-3468-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:51:17.182: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a6e44251-3468-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 11:51:17.286: INFO: Waiting for pod pod-a6e44251-3468-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:51:17.293: INFO: Pod pod-a6e44251-3468-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:51:17.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5g665" for this suite.
Jan 11 11:51:23.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:51:23.504: INFO: namespace: e2e-tests-emptydir-5g665, resource: bindings, ignored listing per whitelist
Jan 11 11:51:23.657: INFO: namespace e2e-tests-emptydir-5g665 deletion completed in 6.263089381s

• [SLOW TEST:16.888 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
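Editor's note: the emptyDir test above mounts a volume backed by the default medium (node storage rather than tmpfs) and has the container report the mount point's file mode. A minimal sketch; the busybox image, mount path, and ls check are assumptions standing in for the suite's mounttest container.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-default-medium"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium left as StorageMediumDefault ("") = backed by node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Print the mode of the mount point; the conformance test asserts
				// the directory permissions expected for the default medium.
				Command:      []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}

	fmt.Printf("volume %q medium %q (empty string means default)\n",
		pod.Spec.Volumes[0].Name, pod.Spec.Volumes[0].EmptyDir.Medium)
}
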
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:51:23.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:52:24.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bvwmw" for this suite.
Jan 11 11:52:48.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:52:48.117: INFO: namespace: e2e-tests-container-probe-bvwmw, resource: bindings, ignored listing per whitelist
Jan 11 11:52:48.270: INFO: namespace e2e-tests-container-probe-bvwmw deletion completed in 24.223874969s

• [SLOW TEST:84.612 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
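Editor's note: a failing readiness probe marks the pod NotReady but, unlike a liveness probe, never restarts the container; that is what the probing test above asserts over its observation window. A minimal sketch of a pod whose readiness probe always fails; the image, command, and timings are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An exec readiness probe that always exits non-zero. Setting the promoted
	// Exec field avoids spelling out the embedded probe-handler struct literal.
	probe := &corev1.Probe{InitialDelaySeconds: 5, PeriodSeconds: 5, FailureThreshold: 3}
	probe.Exec = &corev1.ExecAction{Command: []string{"/bin/false"}}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-never-ready"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:           "test-webserver",
				Image:          "nginx",
				ReadinessProbe: probe,
				// No liveness probe, so the kubelet keeps the container running
				// (restartCount stays 0) even though the pod never becomes Ready.
			}},
		},
	}

	fmt.Println("pod", pod.Name, "readiness probe:", pod.Spec.Containers[0].ReadinessProbe.Exec.Command)
}
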
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:52:48.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-tvmd5
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-tvmd5
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-tvmd5
Jan 11 11:52:48.505: INFO: Found 0 stateful pods, waiting for 1
Jan 11 11:52:58.529: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 11 11:52:58.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tvmd5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 11 11:52:59.271: INFO: stderr: "I0111 11:52:58.844812    1559 log.go:172] (0xc0006620b0) (0xc000586780) Create stream\nI0111 11:52:58.844912    1559 log.go:172] (0xc0006620b0) (0xc000586780) Stream added, broadcasting: 1\nI0111 11:52:58.850327    1559 log.go:172] (0xc0006620b0) Reply frame received for 1\nI0111 11:52:58.850366    1559 log.go:172] (0xc0006620b0) (0xc0002d8a00) Create stream\nI0111 11:52:58.850374    1559 log.go:172] (0xc0006620b0) (0xc0002d8a00) Stream added, broadcasting: 3\nI0111 11:52:58.851843    1559 log.go:172] (0xc0006620b0) Reply frame received for 3\nI0111 11:52:58.851876    1559 log.go:172] (0xc0006620b0) (0xc0005d6000) Create stream\nI0111 11:52:58.851888    1559 log.go:172] (0xc0006620b0) (0xc0005d6000) Stream added, broadcasting: 5\nI0111 11:52:58.853518    1559 log.go:172] (0xc0006620b0) Reply frame received for 5\nI0111 11:52:59.130051    1559 log.go:172] (0xc0006620b0) Data frame received for 3\nI0111 11:52:59.130103    1559 log.go:172] (0xc0002d8a00) (3) Data frame handling\nI0111 11:52:59.130114    1559 log.go:172] (0xc0002d8a00) (3) Data frame sent\nI0111 11:52:59.265140    1559 log.go:172] (0xc0006620b0) Data frame received for 1\nI0111 11:52:59.265242    1559 log.go:172] (0xc000586780) (1) Data frame handling\nI0111 11:52:59.265270    1559 log.go:172] (0xc000586780) (1) Data frame sent\nI0111 11:52:59.265288    1559 log.go:172] (0xc0006620b0) (0xc000586780) Stream removed, broadcasting: 1\nI0111 11:52:59.265428    1559 log.go:172] (0xc0006620b0) (0xc0002d8a00) Stream removed, broadcasting: 3\nI0111 11:52:59.266225    1559 log.go:172] (0xc0006620b0) (0xc0005d6000) Stream removed, broadcasting: 5\nI0111 11:52:59.266307    1559 log.go:172] (0xc0006620b0) (0xc000586780) Stream removed, broadcasting: 1\nI0111 11:52:59.266335    1559 log.go:172] (0xc0006620b0) (0xc0002d8a00) Stream removed, broadcasting: 3\nI0111 11:52:59.266353    1559 log.go:172] (0xc0006620b0) (0xc0005d6000) Stream removed, broadcasting: 5\n"
Jan 11 11:52:59.272: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 11 11:52:59.272: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 11 11:52:59.285: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 11 11:53:09.304: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 11:53:09.304: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 11:53:09.356: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999579s
Jan 11 11:53:10.388: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.98731034s
Jan 11 11:53:11.406: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.954925796s
Jan 11 11:53:12.422: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.937793226s
Jan 11 11:53:13.439: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.92083828s
Jan 11 11:53:14.462: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.903838591s
Jan 11 11:53:15.471: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.8810436s
Jan 11 11:53:16.502: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.872356195s
Jan 11 11:53:17.517: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.841317388s
Jan 11 11:53:18.548: INFO: Verifying statefulset ss doesn't scale past 1 for another 825.850429ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-tvmd5
Jan 11 11:53:19.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tvmd5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 11 11:53:20.184: INFO: stderr: "I0111 11:53:19.843696    1580 log.go:172] (0xc00014c840) (0xc000679360) Create stream\nI0111 11:53:19.843932    1580 log.go:172] (0xc00014c840) (0xc000679360) Stream added, broadcasting: 1\nI0111 11:53:19.853881    1580 log.go:172] (0xc00014c840) Reply frame received for 1\nI0111 11:53:19.853948    1580 log.go:172] (0xc00014c840) (0xc00075e000) Create stream\nI0111 11:53:19.853968    1580 log.go:172] (0xc00014c840) (0xc00075e000) Stream added, broadcasting: 3\nI0111 11:53:19.857441    1580 log.go:172] (0xc00014c840) Reply frame received for 3\nI0111 11:53:19.857537    1580 log.go:172] (0xc00014c840) (0xc00057e000) Create stream\nI0111 11:53:19.857557    1580 log.go:172] (0xc00014c840) (0xc00057e000) Stream added, broadcasting: 5\nI0111 11:53:19.860129    1580 log.go:172] (0xc00014c840) Reply frame received for 5\nI0111 11:53:20.033496    1580 log.go:172] (0xc00014c840) Data frame received for 3\nI0111 11:53:20.033624    1580 log.go:172] (0xc00075e000) (3) Data frame handling\nI0111 11:53:20.033655    1580 log.go:172] (0xc00075e000) (3) Data frame sent\nI0111 11:53:20.175619    1580 log.go:172] (0xc00014c840) (0xc00075e000) Stream removed, broadcasting: 3\nI0111 11:53:20.175949    1580 log.go:172] (0xc00014c840) Data frame received for 1\nI0111 11:53:20.175986    1580 log.go:172] (0xc000679360) (1) Data frame handling\nI0111 11:53:20.176027    1580 log.go:172] (0xc000679360) (1) Data frame sent\nI0111 11:53:20.176051    1580 log.go:172] (0xc00014c840) (0xc000679360) Stream removed, broadcasting: 1\nI0111 11:53:20.176177    1580 log.go:172] (0xc00014c840) (0xc00057e000) Stream removed, broadcasting: 5\nI0111 11:53:20.176437    1580 log.go:172] (0xc00014c840) Go away received\nI0111 11:53:20.176724    1580 log.go:172] (0xc00014c840) (0xc000679360) Stream removed, broadcasting: 1\nI0111 11:53:20.176747    1580 log.go:172] (0xc00014c840) (0xc00075e000) Stream removed, broadcasting: 3\nI0111 11:53:20.176759    1580 log.go:172] (0xc00014c840) (0xc00057e000) Stream removed, broadcasting: 5\n"
Jan 11 11:53:20.184: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 11 11:53:20.184: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 11 11:53:20.203: INFO: Found 1 stateful pods, waiting for 3
Jan 11 11:53:30.218: INFO: Found 2 stateful pods, waiting for 3
Jan 11 11:53:40.214: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 11:53:40.214: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 11:53:40.214: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 11 11:53:50.228: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 11:53:50.228: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 11:53:50.228: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 11 11:53:50.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tvmd5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 11 11:53:50.888: INFO: stderr: "I0111 11:53:50.545565    1602 log.go:172] (0xc00068e2c0) (0xc0006ce780) Create stream\nI0111 11:53:50.545737    1602 log.go:172] (0xc00068e2c0) (0xc0006ce780) Stream added, broadcasting: 1\nI0111 11:53:50.556073    1602 log.go:172] (0xc00068e2c0) Reply frame received for 1\nI0111 11:53:50.556111    1602 log.go:172] (0xc00068e2c0) (0xc00034c780) Create stream\nI0111 11:53:50.556129    1602 log.go:172] (0xc00068e2c0) (0xc00034c780) Stream added, broadcasting: 3\nI0111 11:53:50.557469    1602 log.go:172] (0xc00068e2c0) Reply frame received for 3\nI0111 11:53:50.557542    1602 log.go:172] (0xc00068e2c0) (0xc00065ac80) Create stream\nI0111 11:53:50.557575    1602 log.go:172] (0xc00068e2c0) (0xc00065ac80) Stream added, broadcasting: 5\nI0111 11:53:50.559649    1602 log.go:172] (0xc00068e2c0) Reply frame received for 5\nI0111 11:53:50.692058    1602 log.go:172] (0xc00068e2c0) Data frame received for 3\nI0111 11:53:50.692103    1602 log.go:172] (0xc00034c780) (3) Data frame handling\nI0111 11:53:50.692137    1602 log.go:172] (0xc00034c780) (3) Data frame sent\nI0111 11:53:50.879713    1602 log.go:172] (0xc00068e2c0) (0xc00034c780) Stream removed, broadcasting: 3\nI0111 11:53:50.879934    1602 log.go:172] (0xc00068e2c0) Data frame received for 1\nI0111 11:53:50.879951    1602 log.go:172] (0xc0006ce780) (1) Data frame handling\nI0111 11:53:50.879979    1602 log.go:172] (0xc0006ce780) (1) Data frame sent\nI0111 11:53:50.880006    1602 log.go:172] (0xc00068e2c0) (0xc0006ce780) Stream removed, broadcasting: 1\nI0111 11:53:50.880206    1602 log.go:172] (0xc00068e2c0) (0xc00065ac80) Stream removed, broadcasting: 5\nI0111 11:53:50.880303    1602 log.go:172] (0xc00068e2c0) Go away received\nI0111 11:53:50.880405    1602 log.go:172] (0xc00068e2c0) (0xc0006ce780) Stream removed, broadcasting: 1\nI0111 11:53:50.880429    1602 log.go:172] (0xc00068e2c0) (0xc00034c780) Stream removed, broadcasting: 3\nI0111 11:53:50.880437    1602 log.go:172] (0xc00068e2c0) (0xc00065ac80) Stream removed, broadcasting: 5\n"
Jan 11 11:53:50.888: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 11 11:53:50.888: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 11 11:53:50.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tvmd5 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 11 11:53:51.451: INFO: stderr: "I0111 11:53:51.146985    1623 log.go:172] (0xc00013a6e0) (0xc000770640) Create stream\nI0111 11:53:51.147225    1623 log.go:172] (0xc00013a6e0) (0xc000770640) Stream added, broadcasting: 1\nI0111 11:53:51.153454    1623 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0111 11:53:51.153521    1623 log.go:172] (0xc00013a6e0) (0xc000686d20) Create stream\nI0111 11:53:51.153543    1623 log.go:172] (0xc00013a6e0) (0xc000686d20) Stream added, broadcasting: 3\nI0111 11:53:51.154435    1623 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0111 11:53:51.154453    1623 log.go:172] (0xc00013a6e0) (0xc0007706e0) Create stream\nI0111 11:53:51.154458    1623 log.go:172] (0xc00013a6e0) (0xc0007706e0) Stream added, broadcasting: 5\nI0111 11:53:51.156627    1623 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0111 11:53:51.323234    1623 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0111 11:53:51.323293    1623 log.go:172] (0xc000686d20) (3) Data frame handling\nI0111 11:53:51.323322    1623 log.go:172] (0xc000686d20) (3) Data frame sent\nI0111 11:53:51.443957    1623 log.go:172] (0xc00013a6e0) (0xc0007706e0) Stream removed, broadcasting: 5\nI0111 11:53:51.444049    1623 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0111 11:53:51.444080    1623 log.go:172] (0xc00013a6e0) (0xc000686d20) Stream removed, broadcasting: 3\nI0111 11:53:51.444110    1623 log.go:172] (0xc000770640) (1) Data frame handling\nI0111 11:53:51.444131    1623 log.go:172] (0xc000770640) (1) Data frame sent\nI0111 11:53:51.444171    1623 log.go:172] (0xc00013a6e0) (0xc000770640) Stream removed, broadcasting: 1\nI0111 11:53:51.444190    1623 log.go:172] (0xc00013a6e0) Go away received\nI0111 11:53:51.444729    1623 log.go:172] (0xc00013a6e0) (0xc000770640) Stream removed, broadcasting: 1\nI0111 11:53:51.444748    1623 log.go:172] (0xc00013a6e0) (0xc000686d20) Stream removed, broadcasting: 3\nI0111 11:53:51.444758    1623 log.go:172] (0xc00013a6e0) (0xc0007706e0) Stream removed, broadcasting: 5\n"
Jan 11 11:53:51.451: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 11 11:53:51.451: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 11 11:53:51.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tvmd5 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 11 11:53:51.878: INFO: stderr: "I0111 11:53:51.608023    1645 log.go:172] (0xc0006b0580) (0xc0003d4d20) Create stream\nI0111 11:53:51.608085    1645 log.go:172] (0xc0006b0580) (0xc0003d4d20) Stream added, broadcasting: 1\nI0111 11:53:51.612715    1645 log.go:172] (0xc0006b0580) Reply frame received for 1\nI0111 11:53:51.612737    1645 log.go:172] (0xc0006b0580) (0xc000544500) Create stream\nI0111 11:53:51.612744    1645 log.go:172] (0xc0006b0580) (0xc000544500) Stream added, broadcasting: 3\nI0111 11:53:51.613488    1645 log.go:172] (0xc0006b0580) Reply frame received for 3\nI0111 11:53:51.613508    1645 log.go:172] (0xc0006b0580) (0xc0000ca320) Create stream\nI0111 11:53:51.613519    1645 log.go:172] (0xc0006b0580) (0xc0000ca320) Stream added, broadcasting: 5\nI0111 11:53:51.615281    1645 log.go:172] (0xc0006b0580) Reply frame received for 5\nI0111 11:53:51.744910    1645 log.go:172] (0xc0006b0580) Data frame received for 3\nI0111 11:53:51.744967    1645 log.go:172] (0xc000544500) (3) Data frame handling\nI0111 11:53:51.744988    1645 log.go:172] (0xc000544500) (3) Data frame sent\nI0111 11:53:51.871863    1645 log.go:172] (0xc0006b0580) (0xc000544500) Stream removed, broadcasting: 3\nI0111 11:53:51.872103    1645 log.go:172] (0xc0006b0580) Data frame received for 1\nI0111 11:53:51.872123    1645 log.go:172] (0xc0003d4d20) (1) Data frame handling\nI0111 11:53:51.872136    1645 log.go:172] (0xc0003d4d20) (1) Data frame sent\nI0111 11:53:51.872185    1645 log.go:172] (0xc0006b0580) (0xc0000ca320) Stream removed, broadcasting: 5\nI0111 11:53:51.872214    1645 log.go:172] (0xc0006b0580) (0xc0003d4d20) Stream removed, broadcasting: 1\nI0111 11:53:51.872225    1645 log.go:172] (0xc0006b0580) Go away received\nI0111 11:53:51.872434    1645 log.go:172] (0xc0006b0580) (0xc0003d4d20) Stream removed, broadcasting: 1\nI0111 11:53:51.872472    1645 log.go:172] (0xc0006b0580) (0xc000544500) Stream removed, broadcasting: 3\nI0111 11:53:51.872502    1645 log.go:172] (0xc0006b0580) (0xc0000ca320) Stream removed, broadcasting: 5\n"
Jan 11 11:53:51.879: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 11 11:53:51.879: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 11 11:53:51.879: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 11:53:51.904: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 11 11:54:01.937: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 11:54:01.937: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 11:54:01.937: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 11:54:01.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999712s
Jan 11 11:54:02.998: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986470617s
Jan 11 11:54:04.039: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969458804s
Jan 11 11:54:05.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.928901002s
Jan 11 11:54:06.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.864056534s
Jan 11 11:54:07.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.853542267s
Jan 11 11:54:08.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.831230186s
Jan 11 11:54:09.167: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.818148742s
Jan 11 11:54:10.194: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.800633599s
Jan 11 11:54:11.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 774.114344ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-tvmd5
Jan 11 11:54:12.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tvmd5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 11 11:54:12.999: INFO: stderr: "I0111 11:54:12.551121    1667 log.go:172] (0xc000398370) (0xc0005a12c0) Create stream\nI0111 11:54:12.551361    1667 log.go:172] (0xc000398370) (0xc0005a12c0) Stream added, broadcasting: 1\nI0111 11:54:12.561473    1667 log.go:172] (0xc000398370) Reply frame received for 1\nI0111 11:54:12.561518    1667 log.go:172] (0xc000398370) (0xc00030e000) Create stream\nI0111 11:54:12.561531    1667 log.go:172] (0xc000398370) (0xc00030e000) Stream added, broadcasting: 3\nI0111 11:54:12.563413    1667 log.go:172] (0xc000398370) Reply frame received for 3\nI0111 11:54:12.563462    1667 log.go:172] (0xc000398370) (0xc0005a1360) Create stream\nI0111 11:54:12.563478    1667 log.go:172] (0xc000398370) (0xc0005a1360) Stream added, broadcasting: 5\nI0111 11:54:12.565705    1667 log.go:172] (0xc000398370) Reply frame received for 5\nI0111 11:54:12.827057    1667 log.go:172] (0xc000398370) Data frame received for 3\nI0111 11:54:12.827104    1667 log.go:172] (0xc00030e000) (3) Data frame handling\nI0111 11:54:12.827150    1667 log.go:172] (0xc00030e000) (3) Data frame sent\nI0111 11:54:12.990738    1667 log.go:172] (0xc000398370) (0xc00030e000) Stream removed, broadcasting: 3\nI0111 11:54:12.990952    1667 log.go:172] (0xc000398370) Data frame received for 1\nI0111 11:54:12.991049    1667 log.go:172] (0xc000398370) (0xc0005a1360) Stream removed, broadcasting: 5\nI0111 11:54:12.991089    1667 log.go:172] (0xc0005a12c0) (1) Data frame handling\nI0111 11:54:12.991097    1667 log.go:172] (0xc0005a12c0) (1) Data frame sent\nI0111 11:54:12.991103    1667 log.go:172] (0xc000398370) (0xc0005a12c0) Stream removed, broadcasting: 1\nI0111 11:54:12.991127    1667 log.go:172] (0xc000398370) Go away received\nI0111 11:54:12.991340    1667 log.go:172] (0xc000398370) (0xc0005a12c0) Stream removed, broadcasting: 1\nI0111 11:54:12.991354    1667 log.go:172] (0xc000398370) (0xc00030e000) Stream removed, broadcasting: 3\nI0111 11:54:12.991364    1667 log.go:172] (0xc000398370) (0xc0005a1360) Stream removed, broadcasting: 5\n"
Jan 11 11:54:12.999: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 11 11:54:12.999: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 11 11:54:12.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tvmd5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 11 11:54:13.526: INFO: stderr: "I0111 11:54:13.253503    1689 log.go:172] (0xc000720370) (0xc00075c640) Create stream\nI0111 11:54:13.253779    1689 log.go:172] (0xc000720370) (0xc00075c640) Stream added, broadcasting: 1\nI0111 11:54:13.262956    1689 log.go:172] (0xc000720370) Reply frame received for 1\nI0111 11:54:13.262986    1689 log.go:172] (0xc000720370) (0xc000662d20) Create stream\nI0111 11:54:13.263022    1689 log.go:172] (0xc000720370) (0xc000662d20) Stream added, broadcasting: 3\nI0111 11:54:13.264607    1689 log.go:172] (0xc000720370) Reply frame received for 3\nI0111 11:54:13.264685    1689 log.go:172] (0xc000720370) (0xc000508000) Create stream\nI0111 11:54:13.264726    1689 log.go:172] (0xc000720370) (0xc000508000) Stream added, broadcasting: 5\nI0111 11:54:13.265798    1689 log.go:172] (0xc000720370) Reply frame received for 5\nI0111 11:54:13.393480    1689 log.go:172] (0xc000720370) Data frame received for 3\nI0111 11:54:13.393558    1689 log.go:172] (0xc000662d20) (3) Data frame handling\nI0111 11:54:13.393593    1689 log.go:172] (0xc000662d20) (3) Data frame sent\nI0111 11:54:13.517696    1689 log.go:172] (0xc000720370) Data frame received for 1\nI0111 11:54:13.517813    1689 log.go:172] (0xc00075c640) (1) Data frame handling\nI0111 11:54:13.517840    1689 log.go:172] (0xc00075c640) (1) Data frame sent\nI0111 11:54:13.517891    1689 log.go:172] (0xc000720370) (0xc00075c640) Stream removed, broadcasting: 1\nI0111 11:54:13.518007    1689 log.go:172] (0xc000720370) (0xc000662d20) Stream removed, broadcasting: 3\nI0111 11:54:13.518130    1689 log.go:172] (0xc000720370) (0xc000508000) Stream removed, broadcasting: 5\nI0111 11:54:13.518243    1689 log.go:172] (0xc000720370) (0xc00075c640) Stream removed, broadcasting: 1\nI0111 11:54:13.518308    1689 log.go:172] (0xc000720370) (0xc000662d20) Stream removed, broadcasting: 3\nI0111 11:54:13.518345    1689 log.go:172] (0xc000720370) (0xc000508000) Stream removed, broadcasting: 5\nI0111 11:54:13.518492    1689 log.go:172] (0xc000720370) Go away received\n"
Jan 11 11:54:13.526: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 11 11:54:13.526: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 11 11:54:13.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tvmd5 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 11 11:54:14.250: INFO: stderr: "I0111 11:54:13.721778    1711 log.go:172] (0xc0006e6370) (0xc000708640) Create stream\nI0111 11:54:13.721970    1711 log.go:172] (0xc0006e6370) (0xc000708640) Stream added, broadcasting: 1\nI0111 11:54:13.732490    1711 log.go:172] (0xc0006e6370) Reply frame received for 1\nI0111 11:54:13.732659    1711 log.go:172] (0xc0006e6370) (0xc0005e8be0) Create stream\nI0111 11:54:13.732671    1711 log.go:172] (0xc0006e6370) (0xc0005e8be0) Stream added, broadcasting: 3\nI0111 11:54:13.734600    1711 log.go:172] (0xc0006e6370) Reply frame received for 3\nI0111 11:54:13.734625    1711 log.go:172] (0xc0006e6370) (0xc0005e8d20) Create stream\nI0111 11:54:13.734791    1711 log.go:172] (0xc0006e6370) (0xc0005e8d20) Stream added, broadcasting: 5\nI0111 11:54:13.740544    1711 log.go:172] (0xc0006e6370) Reply frame received for 5\nI0111 11:54:13.963362    1711 log.go:172] (0xc0006e6370) Data frame received for 3\nI0111 11:54:13.963490    1711 log.go:172] (0xc0005e8be0) (3) Data frame handling\nI0111 11:54:13.963525    1711 log.go:172] (0xc0005e8be0) (3) Data frame sent\nI0111 11:54:14.244084    1711 log.go:172] (0xc0006e6370) Data frame received for 1\nI0111 11:54:14.244126    1711 log.go:172] (0xc000708640) (1) Data frame handling\nI0111 11:54:14.244137    1711 log.go:172] (0xc000708640) (1) Data frame sent\nI0111 11:54:14.245615    1711 log.go:172] (0xc0006e6370) (0xc000708640) Stream removed, broadcasting: 1\nI0111 11:54:14.245792    1711 log.go:172] (0xc0006e6370) (0xc0005e8be0) Stream removed, broadcasting: 3\nI0111 11:54:14.245989    1711 log.go:172] (0xc0006e6370) (0xc0005e8d20) Stream removed, broadcasting: 5\nI0111 11:54:14.246030    1711 log.go:172] (0xc0006e6370) Go away received\nI0111 11:54:14.246066    1711 log.go:172] (0xc0006e6370) (0xc000708640) Stream removed, broadcasting: 1\nI0111 11:54:14.246084    1711 log.go:172] (0xc0006e6370) (0xc0005e8be0) Stream removed, broadcasting: 3\nI0111 11:54:14.246095    1711 log.go:172] (0xc0006e6370) (0xc0005e8d20) Stream removed, broadcasting: 5\n"
Jan 11 11:54:14.251: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 11 11:54:14.251: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 11 11:54:14.251: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 11 11:54:44.331: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tvmd5
Jan 11 11:54:44.350: INFO: Scaling statefulset ss to 0
Jan 11 11:54:44.375: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 11:54:44.379: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:54:44.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-tvmd5" for this suite.
Jan 11 11:54:50.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:54:50.697: INFO: namespace: e2e-tests-statefulset-tvmd5, resource: bindings, ignored listing per whitelist
Jan 11 11:54:50.737: INFO: namespace e2e-tests-statefulset-tvmd5 deletion completed in 6.234949151s

• [SLOW TEST:122.467 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
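Editor's note: the StatefulSet test above relies on the default OrderedReady pod management policy: pods are created as ss-0, ss-1, ss-2 in order, each waiting for the previous one to be Running and Ready, and are removed in reverse order on scale-down, with an unhealthy pod halting progress in either direction (the "doesn't scale past N" checks in the log). A minimal sketch of such a StatefulSet; the image is an assumption, while the labels and governing-service name mirror the log.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"foo": "bar", "baz": "blah"}

	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(1), // scaled 1 -> 3 -> 0 over the course of the test
			ServiceName: "test",      // headless governing service created beforehand
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// OrderedReady is the default; stated explicitly because the ordering
			// guarantee is exactly what this conformance test checks.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
				},
			},
		},
	}

	fmt.Println(ss.Name, "pod management policy:", ss.Spec.PodManagementPolicy)
}
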
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:54:50.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-j7mkc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-j7mkc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-j7mkc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-j7mkc;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-j7mkc.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-j7mkc.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-j7mkc.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-j7mkc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-j7mkc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 13.212.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.212.13_udp@PTR;check="$$(dig +tcp +noall +answer +search 13.212.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.212.13_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-j7mkc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-j7mkc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-j7mkc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-j7mkc;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-j7mkc.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-j7mkc.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-j7mkc.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-j7mkc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-j7mkc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 13.212.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.212.13_udp@PTR;check="$$(dig +tcp +noall +answer +search 13.212.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.212.13_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 11:55:05.326: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.335: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.342: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-j7mkc from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.353: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-j7mkc from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.359: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-j7mkc.svc from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.369: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-j7mkc.svc from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.374: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.378: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.382: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.386: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.389: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.394: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.399: INFO: Unable to read 10.101.212.13_udp@PTR from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.403: INFO: Unable to read 10.101.212.13_tcp@PTR from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.463: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.467: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.471: INFO: Unable to read 10.101.212.13_udp@PTR from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.474: INFO: Unable to read 10.101.212.13_tcp@PTR from pod e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005)
Jan 11 11:55:05.474: INFO: Lookups using e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-j7mkc wheezy_tcp@dns-test-service.e2e-tests-dns-j7mkc wheezy_udp@dns-test-service.e2e-tests-dns-j7mkc.svc wheezy_tcp@dns-test-service.e2e-tests-dns-j7mkc.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-j7mkc.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-j7mkc.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.212.13_udp@PTR 10.101.212.13_tcp@PTR jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.212.13_udp@PTR 10.101.212.13_tcp@PTR]

Jan 11 11:55:10.896: INFO: DNS probes using e2e-tests-dns-j7mkc/dns-test-2c808fbc-3469-11ea-b0bd-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:55:11.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-j7mkc" for this suite.
Jan 11 11:55:19.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:55:19.454: INFO: namespace: e2e-tests-dns-j7mkc, resource: bindings, ignored listing per whitelist
Jan 11 11:55:19.512: INFO: namespace e2e-tests-dns-j7mkc deletion completed in 8.188488861s

• [SLOW TEST:28.774 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
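The probes above exercise the standard in-cluster DNS names: A records for dns-test-service.e2e-tests-dns-j7mkc.svc, SRV records for the named "http" port of test-service-2, and PTR records for the service cluster IP. Below is a minimal Go sketch of the same lookups, assuming it runs inside a cluster pod so the cluster DNS server and the svc search suffixes apply; the names and IP are copied from the log, everything else is illustrative.

```go
package main

import (
	"fmt"
	"net"
)

// Minimal sketch of the lookups the wheezy/jessie probe pods perform.
// Assumes it runs in-cluster so resolv.conf points at the cluster DNS.
func main() {
	// A record for the test service (name taken from the log above).
	addrs, err := net.LookupHost("dns-test-service.e2e-tests-dns-j7mkc.svc")
	fmt.Println("A lookup:", addrs, err)

	// SRV record published for the named port "http" of test-service-2.
	_, srvs, err := net.LookupSRV("http", "tcp", "test-service-2.e2e-tests-dns-j7mkc.svc")
	if err != nil {
		fmt.Println("SRV lookup failed:", err)
	}
	for _, s := range srvs {
		fmt.Printf("SRV target=%s port=%d\n", s.Target, s.Port)
	}

	// Reverse (PTR) lookup for the service cluster IP, as in 10.101.212.13_udp@PTR.
	names, err := net.LookupAddr("10.101.212.13")
	fmt.Println("PTR lookup:", names, err)
}
```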
SSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:55:19.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 11 11:55:19.758: INFO: Waiting up to 5m0s for pod "downward-api-3d913faa-3469-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-xqjw7" to be "success or failure"
Jan 11 11:55:19.774: INFO: Pod "downward-api-3d913faa-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.14347ms
Jan 11 11:55:21.989: INFO: Pod "downward-api-3d913faa-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230967833s
Jan 11 11:55:24.004: INFO: Pod "downward-api-3d913faa-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245813672s
Jan 11 11:55:26.020: INFO: Pod "downward-api-3d913faa-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.261999451s
Jan 11 11:55:28.088: INFO: Pod "downward-api-3d913faa-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.330319473s
Jan 11 11:55:30.246: INFO: Pod "downward-api-3d913faa-3469-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.487396685s
STEP: Saw pod success
Jan 11 11:55:30.246: INFO: Pod "downward-api-3d913faa-3469-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:55:30.257: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-3d913faa-3469-11ea-b0bd-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 11 11:55:30.417: INFO: Waiting for pod downward-api-3d913faa-3469-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:55:30.425: INFO: Pod downward-api-3d913faa-3469-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:55:30.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xqjw7" for this suite.
Jan 11 11:55:36.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:55:36.637: INFO: namespace: e2e-tests-downward-api-xqjw7, resource: bindings, ignored listing per whitelist
Jan 11 11:55:36.701: INFO: namespace e2e-tests-downward-api-xqjw7 deletion completed in 6.265079116s

• [SLOW TEST:17.189 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
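What the test above drives, roughly: a container whose environment variables are filled from its own resource limits and requests via resourceFieldRef. Below is a minimal sketch using the core/v1 Go types; the container name matches the dapi-container seen in the log, while the image, command, and resource values are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Sketch of a container that exposes its own limits/requests as env vars
// through the downward API resourceFieldRef.
func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		// Setting explicit requests/limits makes the injected values deterministic.
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("32Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
		},
		Env: []corev1.EnvVar{
			{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			}},
			{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
			}},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```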
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:55:36.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 11 11:55:36.919: INFO: Waiting up to 5m0s for pod "pod-47c86bfb-3469-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-zczw4" to be "success or failure"
Jan 11 11:55:36.930: INFO: Pod "pod-47c86bfb-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.323157ms
Jan 11 11:55:38.994: INFO: Pod "pod-47c86bfb-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074563258s
Jan 11 11:55:41.007: INFO: Pod "pod-47c86bfb-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087690921s
Jan 11 11:55:43.026: INFO: Pod "pod-47c86bfb-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10609419s
Jan 11 11:55:45.059: INFO: Pod "pod-47c86bfb-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139725094s
Jan 11 11:55:47.077: INFO: Pod "pod-47c86bfb-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15724739s
Jan 11 11:55:49.093: INFO: Pod "pod-47c86bfb-3469-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.173529442s
STEP: Saw pod success
Jan 11 11:55:49.093: INFO: Pod "pod-47c86bfb-3469-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:55:49.097: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-47c86bfb-3469-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 11:55:49.183: INFO: Waiting for pod pod-47c86bfb-3469-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:55:49.519: INFO: Pod pod-47c86bfb-3469-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:55:49.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zczw4" for this suite.
Jan 11 11:55:55.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:55:55.642: INFO: namespace: e2e-tests-emptydir-zczw4, resource: bindings, ignored listing per whitelist
Jan 11 11:55:55.888: INFO: namespace e2e-tests-emptydir-zczw4 deletion completed in 6.355208269s

• [SLOW TEST:19.187 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
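The pod created above writes into a memory-backed emptyDir (tmpfs) and checks the resulting mode and content. Below is a minimal sketch of that volume shape; the pod name, image, and command are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Sketch of a pod with a memory-backed (tmpfs) emptyDir volume, the volume
// type the (root,0777,tmpfs) test writes to and reads back.
func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory makes the kubelet back the volume with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "mount | grep /cache && ls -ld /cache"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}
```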
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:55:55.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 11 11:55:56.099: INFO: Waiting up to 5m0s for pod "var-expansion-533b4017-3469-11ea-b0bd-0242ac110005" in namespace "e2e-tests-var-expansion-dzlkx" to be "success or failure"
Jan 11 11:55:56.250: INFO: Pod "var-expansion-533b4017-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 151.511884ms
Jan 11 11:55:58.268: INFO: Pod "var-expansion-533b4017-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168877285s
Jan 11 11:56:00.329: INFO: Pod "var-expansion-533b4017-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230472289s
Jan 11 11:56:02.962: INFO: Pod "var-expansion-533b4017-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.863437274s
Jan 11 11:56:04.976: INFO: Pod "var-expansion-533b4017-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.877660805s
Jan 11 11:56:06.990: INFO: Pod "var-expansion-533b4017-3469-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.890928933s
STEP: Saw pod success
Jan 11 11:56:06.990: INFO: Pod "var-expansion-533b4017-3469-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:56:06.995: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-533b4017-3469-11ea-b0bd-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 11 11:56:07.045: INFO: Waiting for pod var-expansion-533b4017-3469-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:56:07.049: INFO: Pod var-expansion-533b4017-3469-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:56:07.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-dzlkx" for this suite.
Jan 11 11:56:13.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:56:13.322: INFO: namespace: e2e-tests-var-expansion-dzlkx, resource: bindings, ignored listing per whitelist
Jan 11 11:56:13.334: INFO: namespace e2e-tests-var-expansion-dzlkx deletion completed in 6.275358729s

• [SLOW TEST:17.446 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
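The var-expansion pod above relies on the kubelet expanding $(NAME) references in a container's command from that container's own environment before it starts. A minimal sketch; the env var name and message are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Sketch of command-line variable expansion: $(MESSAGE) in the command is
// substituted by the kubelet with the value of the MESSAGE env var.
func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "test message"}},
		Command: []string{"sh", "-c", "echo $(MESSAGE)"},
	}
	fmt.Printf("command %q expands MESSAGE=%q at container start\n", c.Command, c.Env[0].Value)
}
```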
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:56:13.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 11 11:56:13.520: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 11 11:56:13.532: INFO: Waiting for terminating namespaces to be deleted...
Jan 11 11:56:13.536: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 11 11:56:13.552: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 11 11:56:13.552: INFO: 	Container weave ready: true, restart count 0
Jan 11 11:56:13.552: INFO: 	Container weave-npc ready: true, restart count 0
Jan 11 11:56:13.552: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 11 11:56:13.552: INFO: 	Container coredns ready: true, restart count 0
Jan 11 11:56:13.552: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 11:56:13.552: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 11:56:13.552: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 11:56:13.552: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 11 11:56:13.552: INFO: 	Container coredns ready: true, restart count 0
Jan 11 11:56:13.552: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 11 11:56:13.552: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 11 11:56:13.552: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-63af71c7-3469-11ea-b0bd-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-63af71c7-3469-11ea-b0bd-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-63af71c7-3469-11ea-b0bd-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:56:34.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-c92ft" for this suite.
Jan 11 11:56:58.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:56:58.389: INFO: namespace: e2e-tests-sched-pred-c92ft, resource: bindings, ignored listing per whitelist
Jan 11 11:56:58.393: INFO: namespace e2e-tests-sched-pred-c92ft deletion completed in 24.30040201s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:45.059 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
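The sequence above is: launch an unlabeled pod to find a schedulable node, apply the random kubernetes.io/e2e-... label to that node, then relaunch the pod with a matching nodeSelector and verify it lands there. Below is a sketch of the relaunched pod's spec; the label key and value are copied from the log, while the pod name and image are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Sketch of the relaunched pod: it can only schedule onto a node carrying
// the random label the test just applied.
func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-63af71c7-3469-11ea-b0bd-0242ac110005": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}
```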
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:56:58.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-sfh5h
Jan 11 11:57:10.841: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-sfh5h
STEP: checking the pod's current state and verifying that restartCount is present
Jan 11 11:57:10.846: INFO: Initial restart count of pod liveness-exec is 0
Jan 11 11:57:59.773: INFO: Restart count of pod e2e-tests-container-probe-sfh5h/liveness-exec is now 1 (48.92790403s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:57:59.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-sfh5h" for this suite.
Jan 11 11:58:07.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:58:08.121: INFO: namespace: e2e-tests-container-probe-sfh5h, resource: bindings, ignored listing per whitelist
Jan 11 11:58:08.124: INFO: namespace e2e-tests-container-probe-sfh5h deletion completed in 8.21581401s

• [SLOW TEST:69.731 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
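The liveness-exec pod above creates /tmp/health, removes it after a short delay, and carries an exec probe that runs "cat /tmp/health"; once the file is gone the probe fails and the kubelet restarts the container, which is the restart count going 0 to 1 in the log. Below is a sketch of such a container, assuming the v1.13-era core/v1 Go API (the probe handler is the embedded Handler field); the image, command, and timings are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Sketch of an exec liveness probe whose target file disappears after ~10s,
// so the probe starts failing and the kubelet restarts the container.
func main() {
	c := corev1.Container{
		Name:    "liveness",
		Image:   "busybox",
		Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1,
		},
	}
	fmt.Println("liveness probe command:", c.LivenessProbe.Exec.Command)
}
```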
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:58:08.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 11 11:58:08.368: INFO: Waiting up to 5m0s for pod "pod-a210794e-3469-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-6mrss" to be "success or failure"
Jan 11 11:58:08.377: INFO: Pod "pod-a210794e-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.602125ms
Jan 11 11:58:10.389: INFO: Pod "pod-a210794e-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021386252s
Jan 11 11:58:12.400: INFO: Pod "pod-a210794e-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032354757s
Jan 11 11:58:14.417: INFO: Pod "pod-a210794e-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049309034s
Jan 11 11:58:16.436: INFO: Pod "pod-a210794e-3469-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068237458s
Jan 11 11:58:18.451: INFO: Pod "pod-a210794e-3469-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083025949s
STEP: Saw pod success
Jan 11 11:58:18.451: INFO: Pod "pod-a210794e-3469-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 11:58:18.457: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a210794e-3469-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 11:58:18.715: INFO: Waiting for pod pod-a210794e-3469-11ea-b0bd-0242ac110005 to disappear
Jan 11 11:58:18.730: INFO: Pod pod-a210794e-3469-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 11:58:18.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6mrss" for this suite.
Jan 11 11:58:26.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 11:58:26.956: INFO: namespace: e2e-tests-emptydir-6mrss, resource: bindings, ignored listing per whitelist
Jan 11 11:58:27.267: INFO: namespace e2e-tests-emptydir-6mrss deletion completed in 8.50311636s

• [SLOW TEST:19.143 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 11:58:27.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-t4mgl
Jan 11 11:58:37.625: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-t4mgl
STEP: checking the pod's current state and verifying that restartCount is present
Jan 11 11:58:37.631: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:02:38.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-t4mgl" for this suite.
Jan 11 12:02:46.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:02:47.042: INFO: namespace: e2e-tests-container-probe-t4mgl, resource: bindings, ignored listing per whitelist
Jan 11 12:02:47.075: INFO: namespace e2e-tests-container-probe-t4mgl deletion completed in 8.342816168s

• [SLOW TEST:259.808 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:02:47.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-485239c8-346a-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 12:02:47.296: INFO: Waiting up to 5m0s for pod "pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005" in namespace "e2e-tests-secrets-9vrq9" to be "success or failure"
Jan 11 12:02:47.314: INFO: Pod "pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.066451ms
Jan 11 12:02:49.571: INFO: Pod "pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.274415621s
Jan 11 12:02:51.596: INFO: Pod "pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299808544s
Jan 11 12:02:53.761: INFO: Pod "pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464961588s
Jan 11 12:02:55.957: INFO: Pod "pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.660378497s
Jan 11 12:02:57.999: INFO: Pod "pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.703153978s
Jan 11 12:03:00.055: INFO: Pod "pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.758701256s
STEP: Saw pod success
Jan 11 12:03:00.055: INFO: Pod "pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:03:00.063: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 11 12:03:00.432: INFO: Waiting for pod pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:03:00.440: INFO: Pod pod-secrets-48534d1e-346a-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:03:00.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9vrq9" for this suite.
Jan 11 12:03:06.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:03:06.757: INFO: namespace: e2e-tests-secrets-9vrq9, resource: bindings, ignored listing per whitelist
Jan 11 12:03:06.799: INFO: namespace e2e-tests-secrets-9vrq9 deletion completed in 6.306103902s

• [SLOW TEST:19.723 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
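The test above mounts one Secret into the same pod twice, through two separate volumes, and has the container read both paths. A sketch of that pod spec follows; the secret name is copied from the log, while the mount paths, image, and command are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Sketch of mounting a single Secret at two paths via two volumes.
func main() {
	secretName := "secret-test-485239c8-346a-11ea-b0bd-0242ac110005"
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{
			{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
			{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
		},
		Containers: []corev1.Container{{
			Name:    "secret-volume-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
			VolumeMounts: []corev1.VolumeMount{
				{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
				{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```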
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:03:06.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-fklnf
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 11 12:03:06.993: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 11 12:03:47.379: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-fklnf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:03:47.379: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:03:47.450415       9 log.go:172] (0xc000408840) (0xc0010c65a0) Create stream
I0111 12:03:47.450457       9 log.go:172] (0xc000408840) (0xc0010c65a0) Stream added, broadcasting: 1
I0111 12:03:47.457902       9 log.go:172] (0xc000408840) Reply frame received for 1
I0111 12:03:47.457969       9 log.go:172] (0xc000408840) (0xc000be2e60) Create stream
I0111 12:03:47.457989       9 log.go:172] (0xc000408840) (0xc000be2e60) Stream added, broadcasting: 3
I0111 12:03:47.459654       9 log.go:172] (0xc000408840) Reply frame received for 3
I0111 12:03:47.459691       9 log.go:172] (0xc000408840) (0xc000691040) Create stream
I0111 12:03:47.459710       9 log.go:172] (0xc000408840) (0xc000691040) Stream added, broadcasting: 5
I0111 12:03:47.461874       9 log.go:172] (0xc000408840) Reply frame received for 5
I0111 12:03:47.667762       9 log.go:172] (0xc000408840) Data frame received for 3
I0111 12:03:47.667849       9 log.go:172] (0xc000be2e60) (3) Data frame handling
I0111 12:03:47.667974       9 log.go:172] (0xc000be2e60) (3) Data frame sent
I0111 12:03:47.862419       9 log.go:172] (0xc000408840) (0xc000be2e60) Stream removed, broadcasting: 3
I0111 12:03:47.862665       9 log.go:172] (0xc000408840) Data frame received for 1
I0111 12:03:47.862712       9 log.go:172] (0xc000408840) (0xc000691040) Stream removed, broadcasting: 5
I0111 12:03:47.862773       9 log.go:172] (0xc0010c65a0) (1) Data frame handling
I0111 12:03:47.862794       9 log.go:172] (0xc0010c65a0) (1) Data frame sent
I0111 12:03:47.862820       9 log.go:172] (0xc000408840) (0xc0010c65a0) Stream removed, broadcasting: 1
I0111 12:03:47.862857       9 log.go:172] (0xc000408840) Go away received
I0111 12:03:47.863150       9 log.go:172] (0xc000408840) (0xc0010c65a0) Stream removed, broadcasting: 1
I0111 12:03:47.863169       9 log.go:172] (0xc000408840) (0xc000be2e60) Stream removed, broadcasting: 3
I0111 12:03:47.863185       9 log.go:172] (0xc000408840) (0xc000691040) Stream removed, broadcasting: 5
Jan 11 12:03:47.863: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:03:47.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-fklnf" for this suite.
Jan 11 12:04:11.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:04:12.050: INFO: namespace: e2e-tests-pod-network-test-fklnf, resource: bindings, ignored listing per whitelist
Jan 11 12:04:12.137: INFO: namespace e2e-tests-pod-network-test-fklnf deletion completed in 24.203668614s

• [SLOW TEST:65.339 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
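The ExecWithOptions call above runs curl from a host-network helper pod against the netexec pod's /dial endpoint, which in turn dials the target pod over HTTP and reports the hostname it saw. Below is a sketch of that probe as a plain HTTP request; the pod IPs are the ones from this run, and the sample response shown in the comment is illustrative.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// Sketch of the intra-pod connectivity probe: ask the netexec pod at
// 10.32.0.5:8080 to dial the target pod at 10.32.0.4:8080 over HTTP and
// return the hostname it got back.
func main() {
	url := "http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("dial request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("dial response:", string(body)) // e.g. a JSON body listing the responding hostname
}
```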
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:04:12.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 12:04:12.402: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 11 12:04:12.409: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dnct2/daemonsets","resourceVersion":"17922185"},"items":null}

Jan 11 12:04:12.411: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dnct2/pods","resourceVersion":"17922185"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:04:12.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-dnct2" for this suite.
Jan 11 12:04:18.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:04:18.709: INFO: namespace: e2e-tests-daemonsets-dnct2, resource: bindings, ignored listing per whitelist
Jan 11 12:04:18.823: INFO: namespace e2e-tests-daemonsets-dnct2 deletion completed in 6.398705477s

S [SKIPPING] [6.686 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan 11 12:04:12.402: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:04:18.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 12:04:19.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 11 12:04:19.214: INFO: stderr: ""
Jan 11 12:04:19.214: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:04:19.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vx9q7" for this suite.
Jan 11 12:04:25.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:04:25.355: INFO: namespace: e2e-tests-kubectl-vx9q7, resource: bindings, ignored listing per whitelist
Jan 11 12:04:25.657: INFO: namespace e2e-tests-kubectl-vx9q7 deletion completed in 6.374398808s

• [SLOW TEST:6.833 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:04:25.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 11 12:04:25.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-mhrnq'
Jan 11 12:04:27.972: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 11 12:04:27.972: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 11 12:04:27.989: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 11 12:04:28.182: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 11 12:04:28.403: INFO: scanned /root for discovery docs: 
Jan 11 12:04:28.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-mhrnq'
Jan 11 12:04:55.192: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 11 12:04:55.192: INFO: stdout: "Created e2e-test-nginx-rc-25d69e3e61571370a491a78f703ded6d\nScaling up e2e-test-nginx-rc-25d69e3e61571370a491a78f703ded6d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-25d69e3e61571370a491a78f703ded6d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-25d69e3e61571370a491a78f703ded6d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 11 12:04:55.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-mhrnq'
Jan 11 12:04:55.347: INFO: stderr: ""
Jan 11 12:04:55.347: INFO: stdout: "e2e-test-nginx-rc-25d69e3e61571370a491a78f703ded6d-sqxl6 "
Jan 11 12:04:55.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-25d69e3e61571370a491a78f703ded6d-sqxl6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mhrnq'
Jan 11 12:04:55.468: INFO: stderr: ""
Jan 11 12:04:55.468: INFO: stdout: "true"
Jan 11 12:04:55.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-25d69e3e61571370a491a78f703ded6d-sqxl6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mhrnq'
Jan 11 12:04:55.564: INFO: stderr: ""
Jan 11 12:04:55.564: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 11 12:04:55.564: INFO: e2e-test-nginx-rc-25d69e3e61571370a491a78f703ded6d-sqxl6 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 11 12:04:55.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-mhrnq'
Jan 11 12:04:55.712: INFO: stderr: ""
Jan 11 12:04:55.712: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:04:55.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mhrnq" for this suite.
Jan 11 12:05:03.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:05:04.039: INFO: namespace: e2e-tests-kubectl-mhrnq, resource: bindings, ignored listing per whitelist
Jan 11 12:05:04.089: INFO: namespace e2e-tests-kubectl-mhrnq deletion completed in 8.360717598s

• [SLOW TEST:38.431 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:05:04.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 11 12:05:12.380: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:05:55.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-6v5fj" for this suite.
Jan 11 12:06:02.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:06:02.094: INFO: namespace: e2e-tests-namespaces-6v5fj, resource: bindings, ignored listing per whitelist
Jan 11 12:06:02.302: INFO: namespace e2e-tests-namespaces-6v5fj deletion completed in 6.297315196s
STEP: Destroying namespace "e2e-tests-nsdeletetest-7xrm9" for this suite.
Jan 11 12:06:02.305: INFO: Namespace e2e-tests-nsdeletetest-7xrm9 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-2wh8f" for this suite.
Jan 11 12:06:08.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:06:08.390: INFO: namespace: e2e-tests-nsdeletetest-2wh8f, resource: bindings, ignored listing per whitelist
Jan 11 12:06:08.575: INFO: namespace e2e-tests-nsdeletetest-2wh8f deletion completed in 6.269414592s

• [SLOW TEST:64.485 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:06:08.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8jwdx
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 11 12:06:08.796: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 11 12:06:43.084: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-8jwdx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:06:43.084: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:06:43.178433       9 log.go:172] (0xc000789970) (0xc0020341e0) Create stream
I0111 12:06:43.178482       9 log.go:172] (0xc000789970) (0xc0020341e0) Stream added, broadcasting: 1
I0111 12:06:43.186683       9 log.go:172] (0xc000789970) Reply frame received for 1
I0111 12:06:43.186731       9 log.go:172] (0xc000789970) (0xc001e55e00) Create stream
I0111 12:06:43.186750       9 log.go:172] (0xc000789970) (0xc001e55e00) Stream added, broadcasting: 3
I0111 12:06:43.188544       9 log.go:172] (0xc000789970) Reply frame received for 3
I0111 12:06:43.188602       9 log.go:172] (0xc000789970) (0xc002034280) Create stream
I0111 12:06:43.188620       9 log.go:172] (0xc000789970) (0xc002034280) Stream added, broadcasting: 5
I0111 12:06:43.191392       9 log.go:172] (0xc000789970) Reply frame received for 5
I0111 12:06:43.470079       9 log.go:172] (0xc000789970) Data frame received for 3
I0111 12:06:43.470132       9 log.go:172] (0xc001e55e00) (3) Data frame handling
I0111 12:06:43.470162       9 log.go:172] (0xc001e55e00) (3) Data frame sent
I0111 12:06:43.626720       9 log.go:172] (0xc000789970) Data frame received for 1
I0111 12:06:43.626824       9 log.go:172] (0xc0020341e0) (1) Data frame handling
I0111 12:06:43.626860       9 log.go:172] (0xc0020341e0) (1) Data frame sent
I0111 12:06:43.631815       9 log.go:172] (0xc000789970) (0xc002034280) Stream removed, broadcasting: 5
I0111 12:06:43.631939       9 log.go:172] (0xc000789970) (0xc001e55e00) Stream removed, broadcasting: 3
I0111 12:06:43.632046       9 log.go:172] (0xc000789970) (0xc0020341e0) Stream removed, broadcasting: 1
I0111 12:06:43.632072       9 log.go:172] (0xc000789970) Go away received
I0111 12:06:43.632529       9 log.go:172] (0xc000789970) (0xc0020341e0) Stream removed, broadcasting: 1
I0111 12:06:43.632558       9 log.go:172] (0xc000789970) (0xc001e55e00) Stream removed, broadcasting: 3
I0111 12:06:43.632575       9 log.go:172] (0xc000789970) (0xc002034280) Stream removed, broadcasting: 5
Jan 11 12:06:43.632: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:06:43.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-8jwdx" for this suite.
Jan 11 12:07:07.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:07:07.792: INFO: namespace: e2e-tests-pod-network-test-8jwdx, resource: bindings, ignored listing per whitelist
Jan 11 12:07:07.860: INFO: namespace e2e-tests-pod-network-test-8jwdx deletion completed in 24.210214563s

• [SLOW TEST:59.285 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:07:07.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-vjvvm
Jan 11 12:07:18.075: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-vjvvm
STEP: checking the pod's current state and verifying that restartCount is present
Jan 11 12:07:18.081: INFO: Initial restart count of pod liveness-http is 0
Jan 11 12:07:38.468: INFO: Restart count of pod e2e-tests-container-probe-vjvvm/liveness-http is now 1 (20.38781631s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:07:38.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vjvvm" for this suite.
Jan 11 12:07:44.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:07:44.937: INFO: namespace: e2e-tests-container-probe-vjvvm, resource: bindings, ignored listing per whitelist
Jan 11 12:07:45.064: INFO: namespace e2e-tests-container-probe-vjvvm deletion completed in 6.397680674s

• [SLOW TEST:37.203 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
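Here the probe is an httpGet against /healthz rather than an exec; the endpoint starts failing after a while and the kubelet restarts the container, giving the restart count 0 to 1 seen above. A sketch of such a probe, again assuming the v1.13-era core/v1 Go API (embedded Handler field on Probe); the image and port are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Sketch of an HTTP liveness probe: the kubelet GETs /healthz on port 8080
// and restarts the container after the configured number of failures.
func main() {
	c := corev1.Container{
		Name:  "liveness",
		Image: "k8s.gcr.io/liveness",
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1,
		},
	}
	fmt.Printf("httpGet probe: %s on port %s\n",
		c.LivenessProbe.HTTPGet.Path, c.LivenessProbe.HTTPGet.Port.String())
}
```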
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:07:45.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:07:53.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-w9gt6" for this suite.
Jan 11 12:08:47.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:08:47.462: INFO: namespace: e2e-tests-kubelet-test-w9gt6, resource: bindings, ignored listing per whitelist
Jan 11 12:08:47.556: INFO: namespace e2e-tests-kubelet-test-w9gt6 deletion completed in 54.215121666s

• [SLOW TEST:62.491 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
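The hostAliases test above verifies that entries declared in the pod spec are appended to the container's /etc/hosts by the kubelet. Below is a sketch of that spec; the IP and hostnames are illustrative, not taken from the log.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Sketch of pod-level hostAliases: the kubelet writes these entries into the
// container's /etc/hosts before the container starts.
func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		HostAliases: []corev1.HostAlias{
			{IP: "123.45.67.89", Hostnames: []string{"foo.local", "bar.local"}},
		},
		Containers: []corev1.Container{{
			Name:    "busybox-host-aliases",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/hosts"},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```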
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:08:47.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 11 12:08:47.976: INFO: Number of nodes with available pods: 0
Jan 11 12:08:47.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:08:49.489: INFO: Number of nodes with available pods: 0
Jan 11 12:08:49.490: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:08:50.189: INFO: Number of nodes with available pods: 0
Jan 11 12:08:50.189: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:08:51.012: INFO: Number of nodes with available pods: 0
Jan 11 12:08:51.012: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:08:52.004: INFO: Number of nodes with available pods: 0
Jan 11 12:08:52.004: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:08:54.156: INFO: Number of nodes with available pods: 0
Jan 11 12:08:54.156: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:08:54.992: INFO: Number of nodes with available pods: 0
Jan 11 12:08:54.992: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:08:56.157: INFO: Number of nodes with available pods: 0
Jan 11 12:08:56.157: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:08:57.010: INFO: Number of nodes with available pods: 0
Jan 11 12:08:57.010: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:08:58.004: INFO: Number of nodes with available pods: 1
Jan 11 12:08:58.004: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 11 12:08:58.199: INFO: Number of nodes with available pods: 1
Jan 11 12:08:58.199: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-hc9pg, will wait for the garbage collector to delete the pods
Jan 11 12:08:59.646: INFO: Deleting DaemonSet.extensions daemon-set took: 19.400155ms
Jan 11 12:08:59.946: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.423687ms
Jan 11 12:09:01.654: INFO: Number of nodes with available pods: 0
Jan 11 12:09:01.654: INFO: Number of running nodes: 0, number of available pods: 0
Jan 11 12:09:01.747: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hc9pg/daemonsets","resourceVersion":"17922820"},"items":null}

Jan 11 12:09:01.759: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hc9pg/pods","resourceVersion":"17922820"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:09:01.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-hc9pg" for this suite.
Jan 11 12:09:07.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:09:07.916: INFO: namespace: e2e-tests-daemonsets-hc9pg, resource: bindings, ignored listing per whitelist
Jan 11 12:09:07.970: INFO: namespace e2e-tests-daemonsets-hc9pg deletion completed in 6.182001367s

• [SLOW TEST:20.414 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
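
The "daemon-set" object exercised above is an ordinary apps/v1 DaemonSet; forcing one of its pods to the Failed phase makes the controller delete and recreate it, which is what the "revived" step waits for. A minimal sketch, with an illustrative label selector:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label key
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "k8s.gcr.io/pause:3.1", // assumption; any long-running image will do
                    }},
                },
            },
        },
    }
    fmt.Println(ds.Name) // one pod per schedulable node; failed pods are retried
}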
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:09:07.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-2b5751f5-346b-11ea-b0bd-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-2b5751d0-346b-11ea-b0bd-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 11 12:09:08.247: INFO: Waiting up to 5m0s for pod "projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-p6s47" to be "success or failure"
Jan 11 12:09:08.256: INFO: Pod "projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.630927ms
Jan 11 12:09:10.279: INFO: Pod "projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032003001s
Jan 11 12:09:12.323: INFO: Pod "projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076047562s
Jan 11 12:09:14.338: INFO: Pod "projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091058144s
Jan 11 12:09:16.348: INFO: Pod "projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101153538s
Jan 11 12:09:18.654: INFO: Pod "projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.406747474s
STEP: Saw pod success
Jan 11 12:09:18.654: INFO: Pod "projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:09:18.662: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan 11 12:09:19.044: INFO: Waiting for pod projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:09:19.084: INFO: Pod projected-volume-2b57515f-346b-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:09:19.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p6s47" for this suite.
Jan 11 12:09:25.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:09:25.294: INFO: namespace: e2e-tests-projected-p6s47, resource: bindings, ignored listing per whitelist
Jan 11 12:09:25.402: INFO: namespace e2e-tests-projected-p6s47 deletion completed in 6.311545368s

• [SLOW TEST:17.432 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
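
"Check all projections for projected volume plugin" mounts a single projected volume that combines a secret, a configMap, and the downward API under one mount point. A sketch of such a volume; the object names and downward-API path are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "projected-all",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{
                    {Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"}, // illustrative
                    }},
                    {ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"}, // illustrative
                    }},
                    {DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    }},
                },
            },
        },
    }
    fmt.Println(len(vol.Projected.Sources)) // all three sources materialise under a single mount
}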
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:09:25.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-35bef040-346b-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 12:09:25.714: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-4f4g9" to be "success or failure"
Jan 11 12:09:25.733: INFO: Pod "pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.949878ms
Jan 11 12:09:27.914: INFO: Pod "pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199635683s
Jan 11 12:09:30.088: INFO: Pod "pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.373769195s
Jan 11 12:09:32.115: INFO: Pod "pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400118055s
Jan 11 12:09:34.125: INFO: Pod "pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.410232627s
Jan 11 12:09:36.136: INFO: Pod "pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.421115573s
STEP: Saw pod success
Jan 11 12:09:36.136: INFO: Pod "pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:09:36.140: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 11 12:09:36.192: INFO: Waiting for pod pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:09:37.187: INFO: Pod pod-projected-secrets-35c0ad40-346b-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:09:37.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4f4g9" for this suite.
Jan 11 12:09:43.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:09:44.222: INFO: namespace: e2e-tests-projected-4f4g9, resource: bindings, ignored listing per whitelist
Jan 11 12:09:44.243: INFO: namespace e2e-tests-projected-4f4g9 deletion completed in 6.463101139s

• [SLOW TEST:18.840 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
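
"defaultMode set" means the projected volume carries a DefaultMode that fixes the permission bits of the files materialised from the secret. A short sketch; the 0400 value and secret name are illustrative, since the log does not show the mode the test actually used:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // illustrative; the actual mode is not shown in the log
    vol := corev1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &mode,
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"}, // illustrative
                    },
                }},
            },
        },
    }
    fmt.Printf("%#o\n", *vol.Projected.DefaultMode)
}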
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:09:44.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 11 12:09:44.401: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 11 12:09:44.409: INFO: Waiting for terminating namespaces to be deleted...
Jan 11 12:09:44.411: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 11 12:09:44.422: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 11 12:09:44.422: INFO: 	Container coredns ready: true, restart count 0
Jan 11 12:09:44.422: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 12:09:44.422: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 12:09:44.422: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 12:09:44.422: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 11 12:09:44.422: INFO: 	Container coredns ready: true, restart count 0
Jan 11 12:09:44.422: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 11 12:09:44.422: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 11 12:09:44.422: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 11 12:09:44.422: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 11 12:09:44.422: INFO: 	Container weave ready: true, restart count 0
Jan 11 12:09:44.422: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 11 12:09:44.531: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 11 12:09:44.531: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 11 12:09:44.531: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 11 12:09:44.531: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 11 12:09:44.531: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 11 12:09:44.531: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 11 12:09:44.531: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 11 12:09:44.531: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-41069522-346b-11ea-b0bd-0242ac110005.15e8d3d5cf483d67], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-cbvt7/filler-pod-41069522-346b-11ea-b0bd-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-41069522-346b-11ea-b0bd-0242ac110005.15e8d3d6bf73699c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-41069522-346b-11ea-b0bd-0242ac110005.15e8d3d7350eddea], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-41069522-346b-11ea-b0bd-0242ac110005.15e8d3d75eaaf2e6], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e8d3d7ac1f5141], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:09:53.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-cbvt7" for this suite.
Jan 11 12:10:01.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:10:02.096: INFO: namespace: e2e-tests-sched-pred-cbvt7, resource: bindings, ignored listing per whitelist
Jan 11 12:10:02.137: INFO: namespace e2e-tests-sched-pred-cbvt7 deletion completed in 8.380844986s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:17.895 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
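
The predicate test above sums the CPU already requested on the node (100m + 100m for coredns, 250m apiserver, 200m controller-manager, 100m scheduler, 20m weave, i.e. 770m), fills most of the remainder with a "filler" pod, then submits one more pod whose request cannot fit and expects the FailedScheduling "Insufficient cpu" event. A sketch of a pod carrying such a CPU request; the quantity is illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        // Illustrative quantity: anything larger than the node's remaining
                        // allocatable CPU triggers the "Insufficient cpu" event seen above.
                        corev1.ResourceCPU: resource.MustParse("1"),
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Spec.Containers[0].Resources.Requests.Cpu())
}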
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:10:02.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-4c61d2c8-346b-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 12:10:03.619: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-84fp6" to be "success or failure"
Jan 11 12:10:03.636: INFO: Pod "pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.190036ms
Jan 11 12:10:05.968: INFO: Pod "pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348105253s
Jan 11 12:10:07.981: INFO: Pod "pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.361411151s
Jan 11 12:10:10.053: INFO: Pod "pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433786042s
Jan 11 12:10:12.290: INFO: Pod "pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.670122758s
Jan 11 12:10:14.480: INFO: Pod "pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.860704951s
STEP: Saw pod success
Jan 11 12:10:14.480: INFO: Pod "pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:10:14.497: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 11 12:10:14.982: INFO: Waiting for pod pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:10:15.026: INFO: Pod pod-projected-secrets-4c63aefb-346b-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:10:15.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-84fp6" for this suite.
Jan 11 12:10:21.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:10:21.099: INFO: namespace: e2e-tests-projected-84fp6, resource: bindings, ignored listing per whitelist
Jan 11 12:10:21.231: INFO: namespace e2e-tests-projected-84fp6 deletion completed in 6.194956644s

• [SLOW TEST:19.093 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
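
These "consumable from pods in volume" cases all follow the same shape: mount the projected secret into a short-lived container whose only job is to print the file, then assert on the pod's logs (the "success or failure" condition above). A sketch of the mount side; the image, key, and paths are illustrative rather than the suite's actual mounttest fixture:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"}, // illustrative
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"}, // illustrative
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "busybox",                                              // assumption; the suite uses its own test image
                Command: []string{"cat", "/etc/projected-secret-volume/data-1"}, // illustrative key and path
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected-secret-volume",
                }},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    fmt.Println(pod.Name)
}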
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:10:21.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 11 12:10:45.717: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4tj8k PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:10:45.717: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:10:45.817039       9 log.go:172] (0xc000408840) (0xc0024dbb80) Create stream
I0111 12:10:45.817106       9 log.go:172] (0xc000408840) (0xc0024dbb80) Stream added, broadcasting: 1
I0111 12:10:45.867698       9 log.go:172] (0xc000408840) Reply frame received for 1
I0111 12:10:45.867869       9 log.go:172] (0xc000408840) (0xc002718000) Create stream
I0111 12:10:45.867907       9 log.go:172] (0xc000408840) (0xc002718000) Stream added, broadcasting: 3
I0111 12:10:45.870336       9 log.go:172] (0xc000408840) Reply frame received for 3
I0111 12:10:45.870445       9 log.go:172] (0xc000408840) (0xc0027180a0) Create stream
I0111 12:10:45.870466       9 log.go:172] (0xc000408840) (0xc0027180a0) Stream added, broadcasting: 5
I0111 12:10:45.872198       9 log.go:172] (0xc000408840) Reply frame received for 5
I0111 12:10:46.051384       9 log.go:172] (0xc000408840) Data frame received for 3
I0111 12:10:46.051495       9 log.go:172] (0xc002718000) (3) Data frame handling
I0111 12:10:46.051550       9 log.go:172] (0xc002718000) (3) Data frame sent
I0111 12:10:46.207621       9 log.go:172] (0xc000408840) Data frame received for 1
I0111 12:10:46.207679       9 log.go:172] (0xc000408840) (0xc002718000) Stream removed, broadcasting: 3
I0111 12:10:46.207723       9 log.go:172] (0xc0024dbb80) (1) Data frame handling
I0111 12:10:46.207760       9 log.go:172] (0xc0024dbb80) (1) Data frame sent
I0111 12:10:46.207791       9 log.go:172] (0xc000408840) (0xc0027180a0) Stream removed, broadcasting: 5
I0111 12:10:46.207835       9 log.go:172] (0xc000408840) (0xc0024dbb80) Stream removed, broadcasting: 1
I0111 12:10:46.207955       9 log.go:172] (0xc000408840) (0xc0024dbb80) Stream removed, broadcasting: 1
I0111 12:10:46.207972       9 log.go:172] (0xc000408840) (0xc002718000) Stream removed, broadcasting: 3
I0111 12:10:46.207981       9 log.go:172] (0xc000408840) (0xc0027180a0) Stream removed, broadcasting: 5
I0111 12:10:46.208372       9 log.go:172] (0xc000408840) Go away received
Jan 11 12:10:46.208: INFO: Exec stderr: ""
Jan 11 12:10:46.208: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4tj8k PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:10:46.208: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:10:46.303112       9 log.go:172] (0xc000789810) (0xc0025ce280) Create stream
I0111 12:10:46.303176       9 log.go:172] (0xc000789810) (0xc0025ce280) Stream added, broadcasting: 1
I0111 12:10:46.308686       9 log.go:172] (0xc000789810) Reply frame received for 1
I0111 12:10:46.308751       9 log.go:172] (0xc000789810) (0xc002718140) Create stream
I0111 12:10:46.308764       9 log.go:172] (0xc000789810) (0xc002718140) Stream added, broadcasting: 3
I0111 12:10:46.309934       9 log.go:172] (0xc000789810) Reply frame received for 3
I0111 12:10:46.309965       9 log.go:172] (0xc000789810) (0xc00235a000) Create stream
I0111 12:10:46.309971       9 log.go:172] (0xc000789810) (0xc00235a000) Stream added, broadcasting: 5
I0111 12:10:46.311215       9 log.go:172] (0xc000789810) Reply frame received for 5
I0111 12:10:46.431638       9 log.go:172] (0xc000789810) Data frame received for 3
I0111 12:10:46.431795       9 log.go:172] (0xc002718140) (3) Data frame handling
I0111 12:10:46.431856       9 log.go:172] (0xc002718140) (3) Data frame sent
I0111 12:10:46.633070       9 log.go:172] (0xc000789810) Data frame received for 1
I0111 12:10:46.633127       9 log.go:172] (0xc0025ce280) (1) Data frame handling
I0111 12:10:46.633159       9 log.go:172] (0xc0025ce280) (1) Data frame sent
I0111 12:10:46.633926       9 log.go:172] (0xc000789810) (0xc00235a000) Stream removed, broadcasting: 5
I0111 12:10:46.633993       9 log.go:172] (0xc000789810) (0xc0025ce280) Stream removed, broadcasting: 1
I0111 12:10:46.634107       9 log.go:172] (0xc000789810) (0xc002718140) Stream removed, broadcasting: 3
I0111 12:10:46.634153       9 log.go:172] (0xc000789810) Go away received
I0111 12:10:46.634179       9 log.go:172] (0xc000789810) (0xc0025ce280) Stream removed, broadcasting: 1
I0111 12:10:46.634206       9 log.go:172] (0xc000789810) (0xc002718140) Stream removed, broadcasting: 3
I0111 12:10:46.634227       9 log.go:172] (0xc000789810) (0xc00235a000) Stream removed, broadcasting: 5
Jan 11 12:10:46.634: INFO: Exec stderr: ""
Jan 11 12:10:46.634: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4tj8k PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:10:46.634: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:10:46.853672       9 log.go:172] (0xc000789ce0) (0xc0025ce500) Create stream
I0111 12:10:46.853755       9 log.go:172] (0xc000789ce0) (0xc0025ce500) Stream added, broadcasting: 1
I0111 12:10:46.896027       9 log.go:172] (0xc000789ce0) Reply frame received for 1
I0111 12:10:46.896249       9 log.go:172] (0xc000789ce0) (0xc00255e0a0) Create stream
I0111 12:10:46.896296       9 log.go:172] (0xc000789ce0) (0xc00255e0a0) Stream added, broadcasting: 3
I0111 12:10:46.898599       9 log.go:172] (0xc000789ce0) Reply frame received for 3
I0111 12:10:46.898645       9 log.go:172] (0xc000789ce0) (0xc00235a0a0) Create stream
I0111 12:10:46.898676       9 log.go:172] (0xc000789ce0) (0xc00235a0a0) Stream added, broadcasting: 5
I0111 12:10:46.899926       9 log.go:172] (0xc000789ce0) Reply frame received for 5
I0111 12:10:47.041024       9 log.go:172] (0xc000789ce0) Data frame received for 3
I0111 12:10:47.041066       9 log.go:172] (0xc00255e0a0) (3) Data frame handling
I0111 12:10:47.041091       9 log.go:172] (0xc00255e0a0) (3) Data frame sent
I0111 12:10:47.169298       9 log.go:172] (0xc000789ce0) (0xc00255e0a0) Stream removed, broadcasting: 3
I0111 12:10:47.169376       9 log.go:172] (0xc000789ce0) Data frame received for 1
I0111 12:10:47.169405       9 log.go:172] (0xc0025ce500) (1) Data frame handling
I0111 12:10:47.169430       9 log.go:172] (0xc0025ce500) (1) Data frame sent
I0111 12:10:47.169443       9 log.go:172] (0xc000789ce0) (0xc0025ce500) Stream removed, broadcasting: 1
I0111 12:10:47.169459       9 log.go:172] (0xc000789ce0) (0xc00235a0a0) Stream removed, broadcasting: 5
I0111 12:10:47.169545       9 log.go:172] (0xc000789ce0) Go away received
I0111 12:10:47.169661       9 log.go:172] (0xc000789ce0) (0xc0025ce500) Stream removed, broadcasting: 1
I0111 12:10:47.169680       9 log.go:172] (0xc000789ce0) (0xc00255e0a0) Stream removed, broadcasting: 3
I0111 12:10:47.169688       9 log.go:172] (0xc000789ce0) (0xc00235a0a0) Stream removed, broadcasting: 5
Jan 11 12:10:47.169: INFO: Exec stderr: ""
Jan 11 12:10:47.169: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4tj8k PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:10:47.169: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:10:47.249878       9 log.go:172] (0xc00094b8c0) (0xc0023ce280) Create stream
I0111 12:10:47.249932       9 log.go:172] (0xc00094b8c0) (0xc0023ce280) Stream added, broadcasting: 1
I0111 12:10:47.265900       9 log.go:172] (0xc00094b8c0) Reply frame received for 1
I0111 12:10:47.265946       9 log.go:172] (0xc00094b8c0) (0xc001ba60a0) Create stream
I0111 12:10:47.265961       9 log.go:172] (0xc00094b8c0) (0xc001ba60a0) Stream added, broadcasting: 3
I0111 12:10:47.267181       9 log.go:172] (0xc00094b8c0) Reply frame received for 3
I0111 12:10:47.267222       9 log.go:172] (0xc00094b8c0) (0xc0023ce3c0) Create stream
I0111 12:10:47.267235       9 log.go:172] (0xc00094b8c0) (0xc0023ce3c0) Stream added, broadcasting: 5
I0111 12:10:47.270271       9 log.go:172] (0xc00094b8c0) Reply frame received for 5
I0111 12:10:47.401750       9 log.go:172] (0xc00094b8c0) Data frame received for 3
I0111 12:10:47.401836       9 log.go:172] (0xc001ba60a0) (3) Data frame handling
I0111 12:10:47.401880       9 log.go:172] (0xc001ba60a0) (3) Data frame sent
I0111 12:10:47.505114       9 log.go:172] (0xc00094b8c0) Data frame received for 1
I0111 12:10:47.505186       9 log.go:172] (0xc00094b8c0) (0xc001ba60a0) Stream removed, broadcasting: 3
I0111 12:10:47.505232       9 log.go:172] (0xc0023ce280) (1) Data frame handling
I0111 12:10:47.505254       9 log.go:172] (0xc0023ce280) (1) Data frame sent
I0111 12:10:47.505285       9 log.go:172] (0xc00094b8c0) (0xc0023ce3c0) Stream removed, broadcasting: 5
I0111 12:10:47.505305       9 log.go:172] (0xc00094b8c0) (0xc0023ce280) Stream removed, broadcasting: 1
I0111 12:10:47.505320       9 log.go:172] (0xc00094b8c0) Go away received
I0111 12:10:47.505612       9 log.go:172] (0xc00094b8c0) (0xc0023ce280) Stream removed, broadcasting: 1
I0111 12:10:47.505621       9 log.go:172] (0xc00094b8c0) (0xc001ba60a0) Stream removed, broadcasting: 3
I0111 12:10:47.505625       9 log.go:172] (0xc00094b8c0) (0xc0023ce3c0) Stream removed, broadcasting: 5
Jan 11 12:10:47.505: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 11 12:10:47.505: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4tj8k PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:10:47.505: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:10:47.571269       9 log.go:172] (0xc0004086e0) (0xc00235a320) Create stream
I0111 12:10:47.571393       9 log.go:172] (0xc0004086e0) (0xc00235a320) Stream added, broadcasting: 1
I0111 12:10:47.575538       9 log.go:172] (0xc0004086e0) Reply frame received for 1
I0111 12:10:47.575566       9 log.go:172] (0xc0004086e0) (0xc0023ce460) Create stream
I0111 12:10:47.575573       9 log.go:172] (0xc0004086e0) (0xc0023ce460) Stream added, broadcasting: 3
I0111 12:10:47.577865       9 log.go:172] (0xc0004086e0) Reply frame received for 3
I0111 12:10:47.577888       9 log.go:172] (0xc0004086e0) (0xc00255e140) Create stream
I0111 12:10:47.577896       9 log.go:172] (0xc0004086e0) (0xc00255e140) Stream added, broadcasting: 5
I0111 12:10:47.578803       9 log.go:172] (0xc0004086e0) Reply frame received for 5
I0111 12:10:47.693976       9 log.go:172] (0xc0004086e0) Data frame received for 3
I0111 12:10:47.694056       9 log.go:172] (0xc0023ce460) (3) Data frame handling
I0111 12:10:47.694115       9 log.go:172] (0xc0023ce460) (3) Data frame sent
I0111 12:10:47.829185       9 log.go:172] (0xc0004086e0) (0xc0023ce460) Stream removed, broadcasting: 3
I0111 12:10:47.829262       9 log.go:172] (0xc0004086e0) Data frame received for 1
I0111 12:10:47.829286       9 log.go:172] (0xc0004086e0) (0xc00255e140) Stream removed, broadcasting: 5
I0111 12:10:47.829334       9 log.go:172] (0xc00235a320) (1) Data frame handling
I0111 12:10:47.829352       9 log.go:172] (0xc00235a320) (1) Data frame sent
I0111 12:10:47.829368       9 log.go:172] (0xc0004086e0) (0xc00235a320) Stream removed, broadcasting: 1
I0111 12:10:47.829384       9 log.go:172] (0xc0004086e0) Go away received
I0111 12:10:47.829486       9 log.go:172] (0xc0004086e0) (0xc00235a320) Stream removed, broadcasting: 1
I0111 12:10:47.829504       9 log.go:172] (0xc0004086e0) (0xc0023ce460) Stream removed, broadcasting: 3
I0111 12:10:47.829518       9 log.go:172] (0xc0004086e0) (0xc00255e140) Stream removed, broadcasting: 5
Jan 11 12:10:47.829: INFO: Exec stderr: ""
Jan 11 12:10:47.829: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4tj8k PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:10:47.829: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:10:47.910054       9 log.go:172] (0xc0029f24d0) (0xc00255e460) Create stream
I0111 12:10:47.910099       9 log.go:172] (0xc0029f24d0) (0xc00255e460) Stream added, broadcasting: 1
I0111 12:10:47.913412       9 log.go:172] (0xc0029f24d0) Reply frame received for 1
I0111 12:10:47.913437       9 log.go:172] (0xc0029f24d0) (0xc00235a3c0) Create stream
I0111 12:10:47.913449       9 log.go:172] (0xc0029f24d0) (0xc00235a3c0) Stream added, broadcasting: 3
I0111 12:10:47.914811       9 log.go:172] (0xc0029f24d0) Reply frame received for 3
I0111 12:10:47.914910       9 log.go:172] (0xc0029f24d0) (0xc0023ce500) Create stream
I0111 12:10:47.914951       9 log.go:172] (0xc0029f24d0) (0xc0023ce500) Stream added, broadcasting: 5
I0111 12:10:47.916240       9 log.go:172] (0xc0029f24d0) Reply frame received for 5
I0111 12:10:48.016429       9 log.go:172] (0xc0029f24d0) Data frame received for 3
I0111 12:10:48.016512       9 log.go:172] (0xc00235a3c0) (3) Data frame handling
I0111 12:10:48.016554       9 log.go:172] (0xc00235a3c0) (3) Data frame sent
I0111 12:10:48.166719       9 log.go:172] (0xc0029f24d0) (0xc00235a3c0) Stream removed, broadcasting: 3
I0111 12:10:48.166785       9 log.go:172] (0xc0029f24d0) Data frame received for 1
I0111 12:10:48.166795       9 log.go:172] (0xc00255e460) (1) Data frame handling
I0111 12:10:48.166807       9 log.go:172] (0xc00255e460) (1) Data frame sent
I0111 12:10:48.166939       9 log.go:172] (0xc0029f24d0) (0xc00255e460) Stream removed, broadcasting: 1
I0111 12:10:48.166989       9 log.go:172] (0xc0029f24d0) (0xc0023ce500) Stream removed, broadcasting: 5
I0111 12:10:48.167041       9 log.go:172] (0xc0029f24d0) Go away received
I0111 12:10:48.167070       9 log.go:172] (0xc0029f24d0) (0xc00255e460) Stream removed, broadcasting: 1
I0111 12:10:48.167095       9 log.go:172] (0xc0029f24d0) (0xc00235a3c0) Stream removed, broadcasting: 3
I0111 12:10:48.167102       9 log.go:172] (0xc0029f24d0) (0xc0023ce500) Stream removed, broadcasting: 5
Jan 11 12:10:48.167: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 11 12:10:48.167: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4tj8k PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:10:48.167: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:10:48.238437       9 log.go:172] (0xc0029f29a0) (0xc00255e820) Create stream
I0111 12:10:48.238594       9 log.go:172] (0xc0029f29a0) (0xc00255e820) Stream added, broadcasting: 1
I0111 12:10:48.243582       9 log.go:172] (0xc0029f29a0) Reply frame received for 1
I0111 12:10:48.243639       9 log.go:172] (0xc0029f29a0) (0xc0025ce5a0) Create stream
I0111 12:10:48.243652       9 log.go:172] (0xc0029f29a0) (0xc0025ce5a0) Stream added, broadcasting: 3
I0111 12:10:48.245009       9 log.go:172] (0xc0029f29a0) Reply frame received for 3
I0111 12:10:48.245039       9 log.go:172] (0xc0029f29a0) (0xc001ba6140) Create stream
I0111 12:10:48.245052       9 log.go:172] (0xc0029f29a0) (0xc001ba6140) Stream added, broadcasting: 5
I0111 12:10:48.245985       9 log.go:172] (0xc0029f29a0) Reply frame received for 5
I0111 12:10:48.346133       9 log.go:172] (0xc0029f29a0) Data frame received for 3
I0111 12:10:48.346188       9 log.go:172] (0xc0025ce5a0) (3) Data frame handling
I0111 12:10:48.346203       9 log.go:172] (0xc0025ce5a0) (3) Data frame sent
I0111 12:10:48.524005       9 log.go:172] (0xc0029f29a0) Data frame received for 1
I0111 12:10:48.524140       9 log.go:172] (0xc00255e820) (1) Data frame handling
I0111 12:10:48.524194       9 log.go:172] (0xc00255e820) (1) Data frame sent
I0111 12:10:48.524208       9 log.go:172] (0xc0029f29a0) (0xc00255e820) Stream removed, broadcasting: 1
I0111 12:10:48.524333       9 log.go:172] (0xc0029f29a0) (0xc0025ce5a0) Stream removed, broadcasting: 3
I0111 12:10:48.524710       9 log.go:172] (0xc0029f29a0) (0xc001ba6140) Stream removed, broadcasting: 5
I0111 12:10:48.524839       9 log.go:172] (0xc0029f29a0) (0xc00255e820) Stream removed, broadcasting: 1
I0111 12:10:48.524868       9 log.go:172] (0xc0029f29a0) (0xc0025ce5a0) Stream removed, broadcasting: 3
I0111 12:10:48.524878       9 log.go:172] (0xc0029f29a0) (0xc001ba6140) Stream removed, broadcasting: 5
Jan 11 12:10:48.525: INFO: Exec stderr: ""
Jan 11 12:10:48.525: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4tj8k PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:10:48.525: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:10:48.594194       9 log.go:172] (0xc0029f2e70) (0xc00255ebe0) Create stream
I0111 12:10:48.594523       9 log.go:172] (0xc0029f2e70) (0xc00255ebe0) Stream added, broadcasting: 1
I0111 12:10:48.600865       9 log.go:172] (0xc0029f2e70) Reply frame received for 1
I0111 12:10:48.600909       9 log.go:172] (0xc0029f2e70) (0xc00235a460) Create stream
I0111 12:10:48.600917       9 log.go:172] (0xc0029f2e70) (0xc00235a460) Stream added, broadcasting: 3
I0111 12:10:48.602971       9 log.go:172] (0xc0029f2e70) Reply frame received for 3
I0111 12:10:48.602997       9 log.go:172] (0xc0029f2e70) (0xc0025ce6e0) Create stream
I0111 12:10:48.603003       9 log.go:172] (0xc0029f2e70) (0xc0025ce6e0) Stream added, broadcasting: 5
I0111 12:10:48.603958       9 log.go:172] (0xc0029f2e70) Reply frame received for 5
I0111 12:10:48.707158       9 log.go:172] (0xc0029f2e70) Data frame received for 3
I0111 12:10:48.707248       9 log.go:172] (0xc00235a460) (3) Data frame handling
I0111 12:10:48.707275       9 log.go:172] (0xc00235a460) (3) Data frame sent
I0111 12:10:48.815261       9 log.go:172] (0xc0029f2e70) (0xc00235a460) Stream removed, broadcasting: 3
I0111 12:10:48.815342       9 log.go:172] (0xc0029f2e70) Data frame received for 1
I0111 12:10:48.815363       9 log.go:172] (0xc00255ebe0) (1) Data frame handling
I0111 12:10:48.815377       9 log.go:172] (0xc0029f2e70) (0xc0025ce6e0) Stream removed, broadcasting: 5
I0111 12:10:48.815438       9 log.go:172] (0xc00255ebe0) (1) Data frame sent
I0111 12:10:48.815471       9 log.go:172] (0xc0029f2e70) (0xc00255ebe0) Stream removed, broadcasting: 1
I0111 12:10:48.815489       9 log.go:172] (0xc0029f2e70) Go away received
I0111 12:10:48.815591       9 log.go:172] (0xc0029f2e70) (0xc00255ebe0) Stream removed, broadcasting: 1
I0111 12:10:48.815605       9 log.go:172] (0xc0029f2e70) (0xc00235a460) Stream removed, broadcasting: 3
I0111 12:10:48.815615       9 log.go:172] (0xc0029f2e70) (0xc0025ce6e0) Stream removed, broadcasting: 5
Jan 11 12:10:48.815: INFO: Exec stderr: ""
Jan 11 12:10:48.815: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4tj8k PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:10:48.815: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:10:48.912989       9 log.go:172] (0xc00094be40) (0xc0023ce780) Create stream
I0111 12:10:48.913062       9 log.go:172] (0xc00094be40) (0xc0023ce780) Stream added, broadcasting: 1
I0111 12:10:48.917259       9 log.go:172] (0xc00094be40) Reply frame received for 1
I0111 12:10:48.917282       9 log.go:172] (0xc00094be40) (0xc00255ed20) Create stream
I0111 12:10:48.917289       9 log.go:172] (0xc00094be40) (0xc00255ed20) Stream added, broadcasting: 3
I0111 12:10:48.918213       9 log.go:172] (0xc00094be40) Reply frame received for 3
I0111 12:10:48.918233       9 log.go:172] (0xc00094be40) (0xc00235a500) Create stream
I0111 12:10:48.918239       9 log.go:172] (0xc00094be40) (0xc00235a500) Stream added, broadcasting: 5
I0111 12:10:48.919036       9 log.go:172] (0xc00094be40) Reply frame received for 5
I0111 12:10:49.013204       9 log.go:172] (0xc00094be40) Data frame received for 3
I0111 12:10:49.013275       9 log.go:172] (0xc00255ed20) (3) Data frame handling
I0111 12:10:49.013309       9 log.go:172] (0xc00255ed20) (3) Data frame sent
I0111 12:10:49.115642       9 log.go:172] (0xc00094be40) Data frame received for 1
I0111 12:10:49.115724       9 log.go:172] (0xc00094be40) (0xc00255ed20) Stream removed, broadcasting: 3
I0111 12:10:49.115763       9 log.go:172] (0xc0023ce780) (1) Data frame handling
I0111 12:10:49.115804       9 log.go:172] (0xc0023ce780) (1) Data frame sent
I0111 12:10:49.115816       9 log.go:172] (0xc00094be40) (0xc0023ce780) Stream removed, broadcasting: 1
I0111 12:10:49.115844       9 log.go:172] (0xc00094be40) (0xc00235a500) Stream removed, broadcasting: 5
I0111 12:10:49.115888       9 log.go:172] (0xc00094be40) Go away received
I0111 12:10:49.115942       9 log.go:172] (0xc00094be40) (0xc0023ce780) Stream removed, broadcasting: 1
I0111 12:10:49.115958       9 log.go:172] (0xc00094be40) (0xc00255ed20) Stream removed, broadcasting: 3
I0111 12:10:49.115965       9 log.go:172] (0xc00094be40) (0xc00235a500) Stream removed, broadcasting: 5
Jan 11 12:10:49.115: INFO: Exec stderr: ""
Jan 11 12:10:49.116: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-4tj8k PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 11 12:10:49.116: INFO: >>> kubeConfig: /root/.kube/config
I0111 12:10:49.268350       9 log.go:172] (0xc0029f3340) (0xc00255f0e0) Create stream
I0111 12:10:49.268393       9 log.go:172] (0xc0029f3340) (0xc00255f0e0) Stream added, broadcasting: 1
I0111 12:10:49.274613       9 log.go:172] (0xc0029f3340) Reply frame received for 1
I0111 12:10:49.274646       9 log.go:172] (0xc0029f3340) (0xc00255f180) Create stream
I0111 12:10:49.274655       9 log.go:172] (0xc0029f3340) (0xc00255f180) Stream added, broadcasting: 3
I0111 12:10:49.275668       9 log.go:172] (0xc0029f3340) Reply frame received for 3
I0111 12:10:49.275692       9 log.go:172] (0xc0029f3340) (0xc0023ce820) Create stream
I0111 12:10:49.275701       9 log.go:172] (0xc0029f3340) (0xc0023ce820) Stream added, broadcasting: 5
I0111 12:10:49.278044       9 log.go:172] (0xc0029f3340) Reply frame received for 5
I0111 12:10:49.403532       9 log.go:172] (0xc0029f3340) Data frame received for 3
I0111 12:10:49.403643       9 log.go:172] (0xc00255f180) (3) Data frame handling
I0111 12:10:49.403686       9 log.go:172] (0xc00255f180) (3) Data frame sent
I0111 12:10:49.535500       9 log.go:172] (0xc0029f3340) Data frame received for 1
I0111 12:10:49.535541       9 log.go:172] (0xc0029f3340) (0xc00255f180) Stream removed, broadcasting: 3
I0111 12:10:49.535569       9 log.go:172] (0xc00255f0e0) (1) Data frame handling
I0111 12:10:49.535590       9 log.go:172] (0xc00255f0e0) (1) Data frame sent
I0111 12:10:49.535603       9 log.go:172] (0xc0029f3340) (0xc00255f0e0) Stream removed, broadcasting: 1
I0111 12:10:49.535756       9 log.go:172] (0xc0029f3340) (0xc0023ce820) Stream removed, broadcasting: 5
I0111 12:10:49.535802       9 log.go:172] (0xc0029f3340) (0xc00255f0e0) Stream removed, broadcasting: 1
I0111 12:10:49.535831       9 log.go:172] (0xc0029f3340) (0xc00255f180) Stream removed, broadcasting: 3
I0111 12:10:49.535858       9 log.go:172] (0xc0029f3340) (0xc0023ce820) Stream removed, broadcasting: 5
Jan 11 12:10:49.535: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0111 12:10:49.535964       9 log.go:172] (0xc0029f3340) Go away received
Jan 11 12:10:49.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-4tj8k" for this suite.
Jan 11 12:11:45.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:11:45.736: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-4tj8k, resource: bindings, ignored listing per whitelist
Jan 11 12:11:45.762: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-4tj8k deletion completed in 56.209450332s

• [SLOW TEST:84.531 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
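
The three cases exec'd above differ only in the pod and container spec: a plain pod gets a kubelet-managed /etc/hosts, a hostNetwork pod does not, and neither does a container that mounts /etc/hosts itself (the busybox-3 case). A sketch of the latter two conditions; the volume name is illustrative and the mount source is an assumption about the fixture:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    hostNetPod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
        Spec: corev1.PodSpec{
            HostNetwork: true, // /etc/hosts stays the node's own file, not kubelet-managed
            Containers: []corev1.Container{{
                Name:    "busybox-1",
                Image:   "busybox",
                Command: []string{"sleep", "3600"},
            }},
        },
    }

    // A container that mounts /etc/hosts explicitly also opts out of kubelet management.
    explicitMount := corev1.Container{
        Name:  "busybox-3",
        Image: "busybox",
        VolumeMounts: []corev1.VolumeMount{{
            Name:      "host-etc-hosts", // illustrative volume name
            MountPath: "/etc/hosts",
        }},
    }

    fmt.Println(hostNetPod.Spec.HostNetwork, explicitMount.VolumeMounts[0].MountPath)
}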
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:11:45.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-89608f03-346b-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 12:11:45.998: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-hx442" to be "success or failure"
Jan 11 12:11:46.011: INFO: Pod "pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.759381ms
Jan 11 12:11:48.029: INFO: Pod "pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03069838s
Jan 11 12:11:50.043: INFO: Pod "pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045219387s
Jan 11 12:11:52.063: INFO: Pod "pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064410247s
Jan 11 12:11:54.093: INFO: Pod "pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094795585s
Jan 11 12:11:56.111: INFO: Pod "pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112442463s
STEP: Saw pod success
Jan 11 12:11:56.111: INFO: Pod "pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:11:56.119: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 11 12:11:56.210: INFO: Waiting for pod pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:11:56.220: INFO: Pod pod-projected-configmaps-896b0e04-346b-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:11:56.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hx442" for this suite.
Jan 11 12:12:02.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:12:02.387: INFO: namespace: e2e-tests-projected-hx442, resource: bindings, ignored listing per whitelist
Jan 11 12:12:02.875: INFO: namespace e2e-tests-projected-hx442 deletion completed in 6.649750079s

• [SLOW TEST:17.113 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
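
"with mappings as non-root" means the configMap keys are remapped to new paths via items, and the pod runs under a non-root UID so the test also verifies the projected files are readable by that user. A sketch; the key, path, and UID are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1000) // illustrative non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
        Spec: corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"}, // illustrative
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},                             // illustrative mapping
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "projected-configmap-volume-test",
                Image: "busybox", // assumption; the suite uses its own test image
            }},
        },
    }
    fmt.Println(*pod.Spec.SecurityContext.RunAsUser)
}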
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:12:02.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-93922911-346b-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 12:12:03.047: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-8269s" to be "success or failure"
Jan 11 12:12:03.118: INFO: Pod "pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 70.860961ms
Jan 11 12:12:05.332: INFO: Pod "pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284878787s
Jan 11 12:12:07.346: INFO: Pod "pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299065979s
Jan 11 12:12:09.516: INFO: Pod "pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469255774s
Jan 11 12:12:11.534: INFO: Pod "pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487216191s
Jan 11 12:12:13.548: INFO: Pod "pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.501321902s
Jan 11 12:12:16.127: INFO: Pod "pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.079674428s
STEP: Saw pod success
Jan 11 12:12:16.127: INFO: Pod "pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:12:16.153: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 11 12:12:16.226: INFO: Waiting for pod pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:12:16.237: INFO: Pod pod-projected-secrets-93933e29-346b-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:12:16.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8269s" for this suite.
Jan 11 12:12:22.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:12:22.482: INFO: namespace: e2e-tests-projected-8269s, resource: bindings, ignored listing per whitelist
Jan 11 12:12:22.482: INFO: namespace e2e-tests-projected-8269s deletion completed in 6.238554512s

• [SLOW TEST:19.606 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
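
"mappings and Item Mode set" adds a per-item Mode on top of the key-to-path mapping, so the single projected file gets its own permission bits rather than the volume default. A short sketch; the 0400 value, key, and path are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    itemMode := int32(0400) // illustrative; the actual mode is not shown in the log
    src := corev1.VolumeProjection{
        Secret: &corev1.SecretProjection{
            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"}, // illustrative
            Items: []corev1.KeyToPath{{
                Key:  "data-1",          // illustrative key
                Path: "new-path-data-1", // illustrative target path inside the mount
                Mode: &itemMode,
            }},
        },
    }
    fmt.Printf("%#o\n", *src.Secret.Items[0].Mode)
}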
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:12:22.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-9f5695ae-346b-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 12:12:22.871: INFO: Waiting up to 5m0s for pod "pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005" in namespace "e2e-tests-secrets-2d88q" to be "success or failure"
Jan 11 12:12:22.884: INFO: Pod "pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.816147ms
Jan 11 12:12:24.958: INFO: Pod "pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086472873s
Jan 11 12:12:26.976: INFO: Pod "pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104545552s
Jan 11 12:12:29.376: INFO: Pod "pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.505267921s
Jan 11 12:12:31.387: INFO: Pod "pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515655524s
Jan 11 12:12:33.398: INFO: Pod "pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.526985932s
STEP: Saw pod success
Jan 11 12:12:33.398: INFO: Pod "pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:12:33.402: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 11 12:12:33.554: INFO: Waiting for pod pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:12:33.567: INFO: Pod pod-secrets-9f57468d-346b-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:12:33.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2d88q" for this suite.
Jan 11 12:12:39.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:12:39.691: INFO: namespace: e2e-tests-secrets-2d88q, resource: bindings, ignored listing per whitelist
Jan 11 12:12:39.751: INFO: namespace e2e-tests-secrets-2d88q deletion completed in 6.175452289s

• [SLOW TEST:17.269 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
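The defaultMode variant sets the permission bits for every file the secret volume projects rather than for one item. A minimal placeholder pod along those lines (busybox stands in for the test's mount-test image):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400   # applied to every projected file
EOF
kubectl logs secret-defaultmode-demo   # once the pod has completed, shows the volume contents
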
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:12:39.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 11 12:12:39.935: INFO: Waiting up to 5m0s for pod "pod-a987bb25-346b-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-gnb8p" to be "success or failure"
Jan 11 12:12:39.948: INFO: Pod "pod-a987bb25-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.563522ms
Jan 11 12:12:42.138: INFO: Pod "pod-a987bb25-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203437819s
Jan 11 12:12:44.155: INFO: Pod "pod-a987bb25-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220052794s
Jan 11 12:12:46.180: INFO: Pod "pod-a987bb25-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.244889869s
Jan 11 12:12:48.196: INFO: Pod "pod-a987bb25-346b-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26122807s
Jan 11 12:12:50.211: INFO: Pod "pod-a987bb25-346b-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.276729075s
STEP: Saw pod success
Jan 11 12:12:50.212: INFO: Pod "pod-a987bb25-346b-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:12:50.222: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a987bb25-346b-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 12:12:50.461: INFO: Waiting for pod pod-a987bb25-346b-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:12:50.466: INFO: Pod pod-a987bb25-346b-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:12:50.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gnb8p" for this suite.
Jan 11 12:12:56.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:12:56.737: INFO: namespace: e2e-tests-emptydir-gnb8p, resource: bindings, ignored listing per whitelist
Jan 11 12:12:57.024: INFO: namespace e2e-tests-emptydir-gnb8p deletion completed in 6.551205303s

• [SLOW TEST:17.273 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
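The (non-root,0777,default) case writes into an emptyDir on the node's default medium from a non-root UID. A rough equivalent, with an arbitrary UID and busybox in place of the test image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root; any unprivileged UID works for the sketch
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # default medium = node disk (no medium: Memory)
EOF
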
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:12:57.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 11 12:12:58.959: INFO: Pod name wrapped-volume-race-b4e3bdc5-346b-11ea-b0bd-0242ac110005: Found 0 pods out of 5
Jan 11 12:13:04.023: INFO: Pod name wrapped-volume-race-b4e3bdc5-346b-11ea-b0bd-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b4e3bdc5-346b-11ea-b0bd-0242ac110005 in namespace e2e-tests-emptydir-wrapper-zpc7x, will wait for the garbage collector to delete the pods
Jan 11 12:15:18.180: INFO: Deleting ReplicationController wrapped-volume-race-b4e3bdc5-346b-11ea-b0bd-0242ac110005 took: 32.901209ms
Jan 11 12:15:18.581: INFO: Terminating ReplicationController wrapped-volume-race-b4e3bdc5-346b-11ea-b0bd-0242ac110005 pods took: 400.582834ms
STEP: Creating RC which spawns configmap-volume pods
Jan 11 12:16:04.119: INFO: Pod name wrapped-volume-race-23320a3f-346c-11ea-b0bd-0242ac110005: Found 0 pods out of 5
Jan 11 12:16:09.150: INFO: Pod name wrapped-volume-race-23320a3f-346c-11ea-b0bd-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-23320a3f-346c-11ea-b0bd-0242ac110005 in namespace e2e-tests-emptydir-wrapper-zpc7x, will wait for the garbage collector to delete the pods
Jan 11 12:17:53.301: INFO: Deleting ReplicationController wrapped-volume-race-23320a3f-346c-11ea-b0bd-0242ac110005 took: 20.055484ms
Jan 11 12:17:53.502: INFO: Terminating ReplicationController wrapped-volume-race-23320a3f-346c-11ea-b0bd-0242ac110005 pods took: 200.314498ms
STEP: Creating RC which spawns configmap-volume pods
Jan 11 12:18:42.929: INFO: Pod name wrapped-volume-race-81dc3441-346c-11ea-b0bd-0242ac110005: Found 0 pods out of 5
Jan 11 12:18:47.947: INFO: Pod name wrapped-volume-race-81dc3441-346c-11ea-b0bd-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-81dc3441-346c-11ea-b0bd-0242ac110005 in namespace e2e-tests-emptydir-wrapper-zpc7x, will wait for the garbage collector to delete the pods
Jan 11 12:20:30.124: INFO: Deleting ReplicationController wrapped-volume-race-81dc3441-346c-11ea-b0bd-0242ac110005 took: 38.656682ms
Jan 11 12:20:30.625: INFO: Terminating ReplicationController wrapped-volume-race-81dc3441-346c-11ea-b0bd-0242ac110005 pods took: 501.295367ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:21:16.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-zpc7x" for this suite.
Jan 11 12:21:26.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:21:26.517: INFO: namespace: e2e-tests-emptydir-wrapper-zpc7x, resource: bindings, ignored listing per whitelist
Jan 11 12:21:26.758: INFO: namespace e2e-tests-emptydir-wrapper-zpc7x deletion completed in 10.415774669s

• [SLOW TEST:509.733 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
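This spec repeatedly creates ReplicationControllers whose pods mount configMap-backed volumes (which the kubelet wraps in an emptyDir internally) and lets the garbage collector tear the pods down, looking for races in volume setup and teardown. A much-reduced sketch of one round, with a single configMap instead of fifty and placeholder names:

kubectl create configmap wrapped-race-cm --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-demo
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race-demo
  template:
    metadata:
      labels:
        name: wrapped-volume-race-demo
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: cm
          mountPath: /etc/config
      volumes:
      - name: cm
        configMap:
          name: wrapped-race-cm
EOF
kubectl delete rc wrapped-volume-race-demo   # the garbage collector then removes the pods
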
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:21:26.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-e3c51a21-346c-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 12:21:27.180: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005" in namespace "e2e-tests-configmap-fvl2f" to be "success or failure"
Jan 11 12:21:27.196: INFO: Pod "pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.787046ms
Jan 11 12:21:30.051: INFO: Pod "pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.870200095s
Jan 11 12:21:32.712: INFO: Pod "pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.531125795s
Jan 11 12:21:34.732: INFO: Pod "pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.55191948s
Jan 11 12:21:36.880: INFO: Pod "pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.700045518s
Jan 11 12:21:39.455: INFO: Pod "pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.274623189s
Jan 11 12:21:41.467: INFO: Pod "pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.28704838s
Jan 11 12:21:43.480: INFO: Pod "pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.299976499s
STEP: Saw pod success
Jan 11 12:21:43.480: INFO: Pod "pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:21:43.486: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 11 12:21:44.801: INFO: Waiting for pod pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:21:44.840: INFO: Pod pod-configmaps-e3c6ef28-346c-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:21:44.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fvl2f" for this suite.
Jan 11 12:21:51.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:21:51.152: INFO: namespace: e2e-tests-configmap-fvl2f, resource: bindings, ignored listing per whitelist
Jan 11 12:21:51.184: INFO: namespace e2e-tests-configmap-fvl2f deletion completed in 6.330810944s

• [SLOW TEST:24.425 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
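Consuming a configMap volume as non-root only needs a pod-level runAsUser plus an ordinary configMap volume mount. A placeholder sketch of that combination:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
EOF
kubectl logs configmap-nonroot-demo   # expected output: value-1
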
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:21:51.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 11 12:21:51.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f5qml'
Jan 11 12:21:53.975: INFO: stderr: ""
Jan 11 12:21:53.975: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 11 12:21:54.994: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:21:54.994: INFO: Found 0 / 1
Jan 11 12:21:56.034: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:21:56.034: INFO: Found 0 / 1
Jan 11 12:21:56.986: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:21:56.986: INFO: Found 0 / 1
Jan 11 12:21:58.003: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:21:58.003: INFO: Found 0 / 1
Jan 11 12:21:58.991: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:21:58.991: INFO: Found 0 / 1
Jan 11 12:22:00.171: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:00.171: INFO: Found 0 / 1
Jan 11 12:22:01.517: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:01.518: INFO: Found 0 / 1
Jan 11 12:22:02.111: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:02.111: INFO: Found 0 / 1
Jan 11 12:22:03.012: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:03.012: INFO: Found 0 / 1
Jan 11 12:22:04.003: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:04.003: INFO: Found 1 / 1
Jan 11 12:22:04.003: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 11 12:22:04.025: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:04.025: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 11 12:22:04.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-46l7d --namespace=e2e-tests-kubectl-f5qml -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 11 12:22:04.178: INFO: stderr: ""
Jan 11 12:22:04.178: INFO: stdout: "pod/redis-master-46l7d patched\n"
STEP: checking annotations
Jan 11 12:22:04.192: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:04.192: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:22:04.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f5qml" for this suite.
Jan 11 12:22:30.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:22:30.426: INFO: namespace: e2e-tests-kubectl-f5qml, resource: bindings, ignored listing per whitelist
Jan 11 12:22:30.805: INFO: namespace e2e-tests-kubectl-f5qml deletion completed in 26.607991514s

• [SLOW TEST:39.621 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
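The patch step above is plain `kubectl patch` with a strategic-merge body; the pod name (redis-master-46l7d) is generated per run. The same check by hand, in the RC's namespace, against whatever pod it currently owns:

POD=$(kubectl get pods -l app=redis -o jsonpath='{.items[0].metadata.name}')
kubectl patch pod "$POD" -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod "$POD" -o jsonpath='{.metadata.annotations.x}'   # should print: y
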
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:22:30.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-09ec86d1-346d-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 12:22:31.132: INFO: Waiting up to 5m0s for pod "pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005" in namespace "e2e-tests-configmap-4d5fq" to be "success or failure"
Jan 11 12:22:31.146: INFO: Pod "pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.794968ms
Jan 11 12:22:33.175: INFO: Pod "pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043331843s
Jan 11 12:22:35.195: INFO: Pod "pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063131928s
Jan 11 12:22:37.508: INFO: Pod "pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.376216715s
Jan 11 12:22:39.526: INFO: Pod "pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.39387182s
Jan 11 12:22:41.541: INFO: Pod "pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.408765489s
STEP: Saw pod success
Jan 11 12:22:41.541: INFO: Pod "pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:22:41.548: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 11 12:22:41.641: INFO: Waiting for pod pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:22:41.663: INFO: Pod pod-configmaps-09ef112c-346d-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:22:41.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4d5fq" for this suite.
Jan 11 12:22:48.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:22:49.056: INFO: namespace: e2e-tests-configmap-4d5fq, resource: bindings, ignored listing per whitelist
Jan 11 12:22:49.056: INFO: namespace e2e-tests-configmap-4d5fq deletion completed in 6.597932255s

• [SLOW TEST:18.250 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
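Where the previous configMap spec relied on defaults, this one maps a single key to a chosen path and file mode via items. Only the volume stanza differs; a placeholder version:

kubectl create configmap demo-config-map --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-itemmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/mapped-data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config-map
      items:
      - key: data-1
        path: mapped-data-1
        mode: 0400
EOF
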
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:22:49.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 11 12:22:49.260: INFO: namespace e2e-tests-kubectl-z9bv9
Jan 11 12:22:49.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-z9bv9'
Jan 11 12:22:49.603: INFO: stderr: ""
Jan 11 12:22:49.603: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 11 12:22:50.634: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:50.634: INFO: Found 0 / 1
Jan 11 12:22:51.618: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:51.618: INFO: Found 0 / 1
Jan 11 12:22:52.621: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:52.621: INFO: Found 0 / 1
Jan 11 12:22:53.620: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:53.620: INFO: Found 0 / 1
Jan 11 12:22:55.480: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:55.481: INFO: Found 0 / 1
Jan 11 12:22:55.744: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:55.744: INFO: Found 0 / 1
Jan 11 12:22:56.704: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:56.704: INFO: Found 0 / 1
Jan 11 12:22:57.664: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:57.664: INFO: Found 0 / 1
Jan 11 12:22:58.641: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:58.641: INFO: Found 0 / 1
Jan 11 12:22:59.708: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:59.708: INFO: Found 1 / 1
Jan 11 12:22:59.708: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 11 12:22:59.716: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:22:59.716: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 11 12:22:59.716: INFO: wait on redis-master startup in e2e-tests-kubectl-z9bv9 
Jan 11 12:22:59.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tbkw6 redis-master --namespace=e2e-tests-kubectl-z9bv9'
Jan 11 12:22:59.877: INFO: stderr: ""
Jan 11 12:22:59.877: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 11 Jan 12:22:57.461 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Jan 12:22:57.461 # Server started, Redis version 3.2.12\n1:M 11 Jan 12:22:57.461 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Jan 12:22:57.461 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 11 12:22:59.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-z9bv9'
Jan 11 12:23:00.083: INFO: stderr: ""
Jan 11 12:23:00.083: INFO: stdout: "service/rm2 exposed\n"
Jan 11 12:23:00.100: INFO: Service rm2 in namespace e2e-tests-kubectl-z9bv9 found.
STEP: exposing service
Jan 11 12:23:02.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-z9bv9'
Jan 11 12:23:02.376: INFO: stderr: ""
Jan 11 12:23:02.376: INFO: stdout: "service/rm3 exposed\n"
Jan 11 12:23:02.390: INFO: Service rm3 in namespace e2e-tests-kubectl-z9bv9 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:23:04.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z9bv9" for this suite.
Jan 11 12:23:30.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:23:30.758: INFO: namespace: e2e-tests-kubectl-z9bv9, resource: bindings, ignored listing per whitelist
Jan 11 12:23:30.805: INFO: namespace e2e-tests-kubectl-z9bv9 deletion completed in 26.389858224s

• [SLOW TEST:41.749 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
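Both expose steps come straight from kubectl: a service is first carved out of the RC's selector, then a second service is derived from the first. The same commands without the test harness, using the ports from the run above:

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get svc rm2 rm3   # rm2 on 1234 and rm3 on 2345, both targeting 6379
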
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:23:30.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-2daec609-346d-11ea-b0bd-0242ac110005
Jan 11 12:23:31.104: INFO: Pod name my-hostname-basic-2daec609-346d-11ea-b0bd-0242ac110005: Found 0 pods out of 1
Jan 11 12:23:36.362: INFO: Pod name my-hostname-basic-2daec609-346d-11ea-b0bd-0242ac110005: Found 1 pods out of 1
Jan 11 12:23:36.362: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2daec609-346d-11ea-b0bd-0242ac110005" are running
Jan 11 12:23:41.177: INFO: Pod "my-hostname-basic-2daec609-346d-11ea-b0bd-0242ac110005-zsmpf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 12:23:31 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 12:23:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2daec609-346d-11ea-b0bd-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 12:23:31 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2daec609-346d-11ea-b0bd-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-11 12:23:31 +0000 UTC Reason: Message:}])
Jan 11 12:23:41.177: INFO: Trying to dial the pod
Jan 11 12:23:46.221: INFO: Controller my-hostname-basic-2daec609-346d-11ea-b0bd-0242ac110005: Got expected result from replica 1 [my-hostname-basic-2daec609-346d-11ea-b0bd-0242ac110005-zsmpf]: "my-hostname-basic-2daec609-346d-11ea-b0bd-0242ac110005-zsmpf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:23:46.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-vgmhn" for this suite.
Jan 11 12:23:54.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:23:54.300: INFO: namespace: e2e-tests-replication-controller-vgmhn, resource: bindings, ignored listing per whitelist
Jan 11 12:23:54.373: INFO: namespace e2e-tests-replication-controller-vgmhn deletion completed in 8.144327295s

• [SLOW TEST:23.568 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
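The ReplicationController spec stands up an RC with a public image, waits for the replica to run, then dials each replica to confirm it answers. A trimmed-down stand-in reusing the nginx image pulled elsewhere in this run (the real test uses a hostname-echo image, so the names below are illustrative only):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-demo
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
EOF
kubectl get pods -l name=my-hostname-basic-demo -o wide
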
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:23:54.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 11 12:23:55.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-b8x2d'
Jan 11 12:23:55.947: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 11 12:23:55.948: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 11 12:23:58.067: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-f5fg6]
Jan 11 12:23:58.067: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-f5fg6" in namespace "e2e-tests-kubectl-b8x2d" to be "running and ready"
Jan 11 12:23:58.087: INFO: Pod "e2e-test-nginx-rc-f5fg6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.863678ms
Jan 11 12:24:00.116: INFO: Pod "e2e-test-nginx-rc-f5fg6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049182603s
Jan 11 12:24:02.137: INFO: Pod "e2e-test-nginx-rc-f5fg6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06950481s
Jan 11 12:24:04.164: INFO: Pod "e2e-test-nginx-rc-f5fg6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096900138s
Jan 11 12:24:06.178: INFO: Pod "e2e-test-nginx-rc-f5fg6": Phase="Running", Reason="", readiness=true. Elapsed: 8.110853157s
Jan 11 12:24:06.178: INFO: Pod "e2e-test-nginx-rc-f5fg6" satisfied condition "running and ready"
Jan 11 12:24:06.178: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-f5fg6]
Jan 11 12:24:06.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-b8x2d'
Jan 11 12:24:06.422: INFO: stderr: ""
Jan 11 12:24:06.422: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 11 12:24:06.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-b8x2d'
Jan 11 12:24:06.653: INFO: stderr: ""
Jan 11 12:24:06.653: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:24:06.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b8x2d" for this suite.
Jan 11 12:24:30.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:24:30.996: INFO: namespace: e2e-tests-kubectl-b8x2d, resource: bindings, ignored listing per whitelist
Jan 11 12:24:31.031: INFO: namespace e2e-tests-kubectl-b8x2d deletion completed in 24.363343543s

• [SLOW TEST:36.658 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
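`kubectl run --generator=run/v1` (deprecated, as the stderr above notes) creates a ReplicationController rather than a Deployment, and the spec then reads logs through the rc/ prefix. The same flow by hand:

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl get rc e2e-test-nginx-rc
kubectl logs rc/e2e-test-nginx-rc     # empty for nginx until it has served a request, as in the run above
kubectl delete rc e2e-test-nginx-rc
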
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:24:31.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-5191e2df-346d-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 12:24:31.382: INFO: Waiting up to 5m0s for pod "pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005" in namespace "e2e-tests-secrets-sbjxg" to be "success or failure"
Jan 11 12:24:31.387: INFO: Pod "pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.96948ms
Jan 11 12:24:33.400: INFO: Pod "pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017978859s
Jan 11 12:24:35.418: INFO: Pod "pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036610826s
Jan 11 12:24:37.650: INFO: Pod "pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268140924s
Jan 11 12:24:39.668: INFO: Pod "pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.286218207s
Jan 11 12:24:41.703: INFO: Pod "pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.321603982s
STEP: Saw pod success
Jan 11 12:24:41.704: INFO: Pod "pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:24:41.715: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 11 12:24:42.982: INFO: Waiting for pod pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:24:42.996: INFO: Pod pod-secrets-519c57e1-346d-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:24:42.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sbjxg" for this suite.
Jan 11 12:24:49.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:24:49.159: INFO: namespace: e2e-tests-secrets-sbjxg, resource: bindings, ignored listing per whitelist
Jan 11 12:24:49.180: INFO: namespace e2e-tests-secrets-sbjxg deletion completed in 6.173401964s

• [SLOW TEST:18.148 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
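The mappings variant is the same secret volume as before, but with items remapping a key to a new relative path while leaving file modes at their defaults. Only the volume block changes; placeholder names throughout:

kubectl create secret generic demo-secret-map --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mappings-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret-map
      items:
      - key: data-1
        path: new-path-data-1
EOF
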
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:24:49.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 11 12:24:49.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 11 12:24:49.529: INFO: stderr: ""
Jan 11 12:24:49.529: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:24:49.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r8czs" for this suite.
Jan 11 12:24:55.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:24:55.670: INFO: namespace: e2e-tests-kubectl-r8czs, resource: bindings, ignored listing per whitelist
Jan 11 12:24:55.797: INFO: namespace e2e-tests-kubectl-r8czs deletion completed in 6.209609242s

• [SLOW TEST:6.617 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
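The api-versions check is a one-liner; the core group is the bare `v1` entry at the end of the listing above. Stand-alone:

kubectl api-versions | grep -x v1 && echo "core v1 API is available"
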
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:24:55.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:25:06.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-hts6h" for this suite.
Jan 11 12:25:48.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:25:48.284: INFO: namespace: e2e-tests-kubelet-test-hts6h, resource: bindings, ignored listing per whitelist
Jan 11 12:25:48.328: INFO: namespace e2e-tests-kubelet-test-hts6h deletion completed in 42.178023052s

• [SLOW TEST:52.531 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
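The Kubelet spec runs a one-shot busybox command and asserts its stdout is retrievable through the logs endpoint. An equivalent placeholder pod:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo Hello from the busybox pod"]
EOF
kubectl logs busybox-scheduling-demo   # once the container has run; expected output: Hello from the busybox pod
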
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:25:48.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:26:00.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9j8ft" for this suite.
Jan 11 12:26:06.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:26:06.879: INFO: namespace: e2e-tests-emptydir-wrapper-9j8ft, resource: bindings, ignored listing per whitelist
Jan 11 12:26:06.908: INFO: namespace e2e-tests-emptydir-wrapper-9j8ft deletion completed in 6.217252668s

• [SLOW TEST:18.579 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
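Here a single pod mounts a secret-backed and a configMap-backed volume side by side, and the cleanup steps confirm neither internal emptyDir wrapper trips over the other. A rough placeholder version of that layout:

kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret
  - name: configmap-volume
    configMap:
      name: wrapper-configmap
EOF
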
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:26:06.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:27:09.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-4xmcj" for this suite.
Jan 11 12:27:18.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:27:18.180: INFO: namespace: e2e-tests-container-runtime-4xmcj, resource: bindings, ignored listing per whitelist
Jan 11 12:27:18.292: INFO: namespace e2e-tests-container-runtime-4xmcj deletion completed in 8.242541322s

• [SLOW TEST:71.384 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
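Each terminate-cmd-* container runs a command with a known exit code under a given restart policy, and the spec then reads RestartCount, Phase, Ready and State back from the pod status. A cut-down way to poke at the same status fields for one case (restart policy and exit code here are arbitrary):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd-demo
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF
kubectl get pod terminate-cmd-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-cmd-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
kubectl get pod terminate-cmd-demo -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'
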
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:27:18.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-b544ee8c-346d-11ea-b0bd-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b544ee8c-346d-11ea-b0bd-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:28:47.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6bnct" for this suite.
Jan 11 12:29:11.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:29:11.347: INFO: namespace: e2e-tests-configmap-6bnct, resource: bindings, ignored listing per whitelist
Jan 11 12:29:11.460: INFO: namespace e2e-tests-configmap-6bnct deletion completed in 24.235955809s

• [SLOW TEST:113.167 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
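The update spec keeps a pod running against a mounted configMap, patches the configMap, and waits for the kubelet to re-project the file, which is why this one runs for minutes rather than seconds. A hand-driven version with placeholder names:

kubectl create configmap live-update-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-update-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: live-update-demo
EOF
kubectl patch configmap live-update-demo -p '{"data":{"data-1":"value-2"}}'
# after the next kubelet sync the projected file changes in place:
kubectl exec configmap-update-demo -- cat /etc/configmap-volume/data-1
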
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:29:11.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 11 12:29:11.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-86khd'
Jan 11 12:29:11.899: INFO: stderr: ""
Jan 11 12:29:11.900: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 11 12:29:13.332: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:13.332: INFO: Found 0 / 1
Jan 11 12:29:14.110: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:14.110: INFO: Found 0 / 1
Jan 11 12:29:15.235: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:15.235: INFO: Found 0 / 1
Jan 11 12:29:15.919: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:15.919: INFO: Found 0 / 1
Jan 11 12:29:16.916: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:16.916: INFO: Found 0 / 1
Jan 11 12:29:18.519: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:18.520: INFO: Found 0 / 1
Jan 11 12:29:19.360: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:19.360: INFO: Found 0 / 1
Jan 11 12:29:19.921: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:19.921: INFO: Found 0 / 1
Jan 11 12:29:20.914: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:20.914: INFO: Found 0 / 1
Jan 11 12:29:21.928: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:21.928: INFO: Found 1 / 1
Jan 11 12:29:21.928: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 11 12:29:21.940: INFO: Selector matched 1 pods for map[app:redis]
Jan 11 12:29:21.940: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 11 12:29:21.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ghl9l redis-master --namespace=e2e-tests-kubectl-86khd'
Jan 11 12:29:22.107: INFO: stderr: ""
Jan 11 12:29:22.107: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 11 Jan 12:29:21.022 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Jan 12:29:21.022 # Server started, Redis version 3.2.12\n1:M 11 Jan 12:29:21.023 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Jan 12:29:21.023 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 11 12:29:22.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ghl9l redis-master --namespace=e2e-tests-kubectl-86khd --tail=1'
Jan 11 12:29:22.304: INFO: stderr: ""
Jan 11 12:29:22.305: INFO: stdout: "1:M 11 Jan 12:29:21.023 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 11 12:29:22.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ghl9l redis-master --namespace=e2e-tests-kubectl-86khd --limit-bytes=1'
Jan 11 12:29:22.429: INFO: stderr: ""
Jan 11 12:29:22.430: INFO: stdout: " "
STEP: exposing timestamps
Jan 11 12:29:22.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ghl9l redis-master --namespace=e2e-tests-kubectl-86khd --tail=1 --timestamps'
Jan 11 12:29:22.663: INFO: stderr: ""
Jan 11 12:29:22.663: INFO: stdout: "2020-01-11T12:29:21.023369288Z 1:M 11 Jan 12:29:21.023 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 11 12:29:25.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ghl9l redis-master --namespace=e2e-tests-kubectl-86khd --since=1s'
Jan 11 12:29:25.389: INFO: stderr: ""
Jan 11 12:29:25.389: INFO: stdout: ""
Jan 11 12:29:25.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ghl9l redis-master --namespace=e2e-tests-kubectl-86khd --since=24h'
Jan 11 12:29:25.562: INFO: stderr: ""
Jan 11 12:29:25.562: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 11 Jan 12:29:21.022 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Jan 12:29:21.022 # Server started, Redis version 3.2.12\n1:M 11 Jan 12:29:21.023 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Jan 12:29:21.023 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 11 12:29:25.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-86khd'
Jan 11 12:29:25.696: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 12:29:25.696: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 11 12:29:25.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-86khd'
Jan 11 12:29:25.816: INFO: stderr: "No resources found.\n"
Jan 11 12:29:25.816: INFO: stdout: ""
Jan 11 12:29:25.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-86khd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 11 12:29:25.935: INFO: stderr: ""
Jan 11 12:29:25.935: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:29:25.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-86khd" for this suite.
Jan 11 12:29:31.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:29:32.128: INFO: namespace: e2e-tests-kubectl-86khd, resource: bindings, ignored listing per whitelist
Jan 11 12:29:32.167: INFO: namespace e2e-tests-kubectl-86khd deletion completed in 6.221785169s

• [SLOW TEST:20.706 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
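For reference, the log-filtering behaviour exercised in the test above can be reproduced by hand with plain kubectl; a minimal sketch, assuming a running pod named redis-master-ghl9l with a container named redis-master (the names used in this run; substitute any running pod and container otherwise):

# last line only
kubectl logs redis-master-ghl9l -c redis-master --tail=1
# first byte of output only
kubectl logs redis-master-ghl9l -c redis-master --limit-bytes=1
# prefix each returned line with an RFC3339 timestamp
kubectl logs redis-master-ghl9l -c redis-master --tail=1 --timestamps
# restrict output to lines emitted within the last second / last 24 hours
kubectl logs redis-master-ghl9l -c redis-master --since=1s
kubectl logs redis-master-ghl9l -c redis-master --since=24h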
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:29:32.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 12:29:32.372: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-rvwzb" to be "success or failure"
Jan 11 12:29:32.379: INFO: Pod "downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.474983ms
Jan 11 12:29:34.398: INFO: Pod "downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025807762s
Jan 11 12:29:36.416: INFO: Pod "downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043672423s
Jan 11 12:29:38.716: INFO: Pod "downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.344308174s
Jan 11 12:29:40.780: INFO: Pod "downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.40766432s
Jan 11 12:29:42.794: INFO: Pod "downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.421514949s
STEP: Saw pod success
Jan 11 12:29:42.794: INFO: Pod "downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:29:42.809: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 12:29:44.021: INFO: Waiting for pod downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:29:44.500: INFO: Pod downwardapi-volume-04fbb2df-346e-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:29:44.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rvwzb" for this suite.
Jan 11 12:29:50.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:29:50.756: INFO: namespace: e2e-tests-projected-rvwzb, resource: bindings, ignored listing per whitelist
Jan 11 12:29:51.038: INFO: namespace e2e-tests-projected-rvwzb deletion completed in 6.518820852s

• [SLOW TEST:18.870 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
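The pod used by this test is generated by the e2e framework; an equivalent hand-runnable sketch (all names below are illustrative, not taken from the run) projects the container's CPU request into a file through a projected downward API volume and prints it:

# create a pod that exposes its own CPU request via a projected downwardAPI volume
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF
kubectl logs downwardapi-cpu-request-demo   # with a 1m divisor, a 250m request prints 250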
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:29:51.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 11 12:29:51.402: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-5c89b" to be "success or failure"
Jan 11 12:29:51.416: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.868525ms
Jan 11 12:29:53.437: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03458692s
Jan 11 12:29:55.449: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04712284s
Jan 11 12:29:57.784: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382282365s
Jan 11 12:30:00.236: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.83356405s
Jan 11 12:30:02.249: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.846610303s
Jan 11 12:30:04.269: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.866991174s
STEP: Saw pod success
Jan 11 12:30:04.269: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 11 12:30:04.275: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 11 12:30:05.468: INFO: Waiting for pod pod-host-path-test to disappear
Jan 11 12:30:05.483: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:30:05.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-5c89b" for this suite.
Jan 11 12:30:11.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:30:11.691: INFO: namespace: e2e-tests-hostpath-5c89b, resource: bindings, ignored listing per whitelist
Jan 11 12:30:11.724: INFO: namespace e2e-tests-hostpath-5c89b deletion completed in 6.229555285s

• [SLOW TEST:20.685 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
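A hand-runnable sketch of the same idea (names illustrative): mount a hostPath directory into a pod and print the mode the container sees for the mount point; the exact mode depends on the host directory.

# pod that mounts a host directory and reports its permission bits
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp
      type: DirectoryOrCreate
EOF
kubectl logs hostpath-mode-demo   # prints the octal mode of the mounted directory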
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:30:11.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 11 12:30:11.990: INFO: Waiting up to 5m0s for pod "downward-api-1ca37237-346e-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-5d6fm" to be "success or failure"
Jan 11 12:30:12.097: INFO: Pod "downward-api-1ca37237-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 107.497823ms
Jan 11 12:30:14.130: INFO: Pod "downward-api-1ca37237-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139987517s
Jan 11 12:30:16.155: INFO: Pod "downward-api-1ca37237-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165543488s
Jan 11 12:30:18.179: INFO: Pod "downward-api-1ca37237-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189186711s
Jan 11 12:30:20.270: INFO: Pod "downward-api-1ca37237-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.279931006s
Jan 11 12:30:22.279: INFO: Pod "downward-api-1ca37237-346e-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.288858987s
STEP: Saw pod success
Jan 11 12:30:22.279: INFO: Pod "downward-api-1ca37237-346e-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:30:22.830: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1ca37237-346e-11ea-b0bd-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 11 12:30:23.127: INFO: Waiting for pod downward-api-1ca37237-346e-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:30:23.171: INFO: Pod downward-api-1ca37237-346e-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:30:23.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5d6fm" for this suite.
Jan 11 12:30:29.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:30:29.301: INFO: namespace: e2e-tests-downward-api-5d6fm, resource: bindings, ignored listing per whitelist
Jan 11 12:30:29.332: INFO: namespace e2e-tests-downward-api-5d6fm deletion completed in 6.154057617s

• [SLOW TEST:17.608 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
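A minimal sketch of the mechanism this test covers (names illustrative): expose the pod's own UID through a downward API fieldRef as an environment variable and print it.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
kubectl logs downward-api-uid-demo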
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:30:29.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 11 12:30:29.521: INFO: Waiting up to 5m0s for pod "downward-api-27171b61-346e-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-sgz4f" to be "success or failure"
Jan 11 12:30:29.534: INFO: Pod "downward-api-27171b61-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.107567ms
Jan 11 12:30:31.552: INFO: Pod "downward-api-27171b61-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030229043s
Jan 11 12:30:33.576: INFO: Pod "downward-api-27171b61-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054731937s
Jan 11 12:30:35.676: INFO: Pod "downward-api-27171b61-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154603867s
Jan 11 12:30:38.265: INFO: Pod "downward-api-27171b61-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743281423s
Jan 11 12:30:40.561: INFO: Pod "downward-api-27171b61-346e-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.040047062s
STEP: Saw pod success
Jan 11 12:30:40.562: INFO: Pod "downward-api-27171b61-346e-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:30:40.583: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-27171b61-346e-11ea-b0bd-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 11 12:30:40.925: INFO: Waiting for pod downward-api-27171b61-346e-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:30:40.939: INFO: Pod downward-api-27171b61-346e-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:30:40.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sgz4f" for this suite.
Jan 11 12:30:47.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:30:47.114: INFO: namespace: e2e-tests-downward-api-sgz4f, resource: bindings, ignored listing per whitelist
Jan 11 12:30:47.265: INFO: namespace e2e-tests-downward-api-sgz4f deletion completed in 6.226793342s

• [SLOW TEST:17.933 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
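The same fieldRef mechanism covers the fields this test checks; a sketch with illustrative names exposing pod name, namespace and pod IP:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-api-env-demo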
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:30:47.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 12:30:47.544: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-85fj8" to be "success or failure"
Jan 11 12:30:47.564: INFO: Pod "downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.878636ms
Jan 11 12:30:49.578: INFO: Pod "downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033303365s
Jan 11 12:30:51.595: INFO: Pod "downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050752682s
Jan 11 12:30:53.716: INFO: Pod "downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171495032s
Jan 11 12:30:55.732: INFO: Pod "downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187950982s
Jan 11 12:30:57.749: INFO: Pod "downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.20495761s
STEP: Saw pod success
Jan 11 12:30:57.750: INFO: Pod "downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:30:57.759: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 12:30:58.218: INFO: Waiting for pod downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:30:58.618: INFO: Pod downwardapi-volume-31d22781-346e-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:30:58.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-85fj8" for this suite.
Jan 11 12:31:04.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:31:04.865: INFO: namespace: e2e-tests-projected-85fj8, resource: bindings, ignored listing per whitelist
Jan 11 12:31:04.873: INFO: namespace e2e-tests-projected-85fj8 deletion completed in 6.222852226s

• [SLOW TEST:17.608 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
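This is the same projected downward API pattern as the CPU-request test earlier, switched to a resourceFieldRef on the memory limit; a sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
EOF
kubectl logs downwardapi-memory-limit-demo   # with a 1Mi divisor, a 64Mi limit prints 64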
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:31:04.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 11 12:31:05.089: INFO: Waiting up to 5m0s for pod "var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005" in namespace "e2e-tests-var-expansion-v5r9f" to be "success or failure"
Jan 11 12:31:05.099: INFO: Pod "var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.09359ms
Jan 11 12:31:07.292: INFO: Pod "var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202472333s
Jan 11 12:31:09.316: INFO: Pod "var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226326043s
Jan 11 12:31:11.495: INFO: Pod "var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406214754s
Jan 11 12:31:13.505: INFO: Pod "var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.415978562s
Jan 11 12:31:15.525: INFO: Pod "var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.435401386s
STEP: Saw pod success
Jan 11 12:31:15.525: INFO: Pod "var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:31:15.535: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 11 12:31:15.745: INFO: Waiting for pod var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:31:15.762: INFO: Pod var-expansion-3c498d2c-346e-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:31:15.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-v5r9f" for this suite.
Jan 11 12:31:21.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:31:21.974: INFO: namespace: e2e-tests-var-expansion-v5r9f, resource: bindings, ignored listing per whitelist
Jan 11 12:31:22.105: INFO: namespace e2e-tests-var-expansion-v5r9f deletion completed in 6.332474039s

• [SLOW TEST:17.231 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
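A minimal sketch of the env-composition feature this test exercises (names illustrative): one env var is built from another using the $(VAR) expansion syntax that Kubernetes applies to env values declared earlier in the same container.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $COMPOSED_VAR"]
    env:
    - name: BASE_VAR
      value: "base-value"
    - name: COMPOSED_VAR
      value: "prefix-$(BASE_VAR)-suffix"
EOF
kubectl logs var-expansion-demo   # prints: prefix-base-value-suffix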
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:31:22.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 12:31:22.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 11 12:31:22.477: INFO: stderr: ""
Jan 11 12:31:22.477: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 11 12:31:22.487: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:31:22.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mgqpp" for this suite.
Jan 11 12:31:28.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:31:28.758: INFO: namespace: e2e-tests-kubectl-mgqpp, resource: bindings, ignored listing per whitelist
Jan 11 12:31:28.770: INFO: namespace e2e-tests-kubectl-mgqpp deletion completed in 6.250212621s

S [SKIPPING] [6.665 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 11 12:31:22.488: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
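The test above was skipped by the client/server version gate; the describe checks it would have performed can still be run by hand against any replication controller and its pods (names below are illustrative):

kubectl describe rc redis-master
kubectl describe pods -l app=redis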
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:31:28.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4a7a97ce-346e-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 12:31:29.088: INFO: Waiting up to 5m0s for pod "pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005" in namespace "e2e-tests-secrets-2rhbv" to be "success or failure"
Jan 11 12:31:29.097: INFO: Pod "pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.760202ms
Jan 11 12:31:31.128: INFO: Pod "pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03937527s
Jan 11 12:31:33.144: INFO: Pod "pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055195477s
Jan 11 12:31:35.500: INFO: Pod "pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411864166s
Jan 11 12:31:37.717: INFO: Pod "pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.628228105s
Jan 11 12:31:39.736: INFO: Pod "pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.647552353s
STEP: Saw pod success
Jan 11 12:31:39.736: INFO: Pod "pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:31:39.744: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 11 12:31:40.184: INFO: Waiting for pod pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:31:40.192: INFO: Pod pod-secrets-4a944309-346e-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:31:40.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2rhbv" for this suite.
Jan 11 12:31:46.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:31:46.438: INFO: namespace: e2e-tests-secrets-2rhbv, resource: bindings, ignored listing per whitelist
Jan 11 12:31:46.462: INFO: namespace e2e-tests-secrets-2rhbv deletion completed in 6.260215072s

• [SLOW TEST:17.691 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
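A hand-runnable sketch of what this test covers (names and values illustrative): create a Secret and consume one of its keys as an environment variable via secretKeyRef.

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
kubectl logs pod-secrets-demo   # prints: SECRET_DATA=value-1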
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:31:46.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-zdcfh
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-zdcfh
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-zdcfh
Jan 11 12:31:46.863: INFO: Found 0 stateful pods, waiting for 1
Jan 11 12:31:56.883: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan 11 12:32:06.882: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 11 12:32:06.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zdcfh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 11 12:32:07.671: INFO: stderr: "I0111 12:32:07.209477    2340 log.go:172] (0xc0006f22c0) (0xc000736640) Create stream\nI0111 12:32:07.209689    2340 log.go:172] (0xc0006f22c0) (0xc000736640) Stream added, broadcasting: 1\nI0111 12:32:07.230852    2340 log.go:172] (0xc0006f22c0) Reply frame received for 1\nI0111 12:32:07.230968    2340 log.go:172] (0xc0006f22c0) (0xc0007366e0) Create stream\nI0111 12:32:07.230997    2340 log.go:172] (0xc0006f22c0) (0xc0007366e0) Stream added, broadcasting: 3\nI0111 12:32:07.233053    2340 log.go:172] (0xc0006f22c0) Reply frame received for 3\nI0111 12:32:07.233097    2340 log.go:172] (0xc0006f22c0) (0xc000792e60) Create stream\nI0111 12:32:07.233112    2340 log.go:172] (0xc0006f22c0) (0xc000792e60) Stream added, broadcasting: 5\nI0111 12:32:07.234290    2340 log.go:172] (0xc0006f22c0) Reply frame received for 5\nI0111 12:32:07.511045    2340 log.go:172] (0xc0006f22c0) Data frame received for 3\nI0111 12:32:07.511136    2340 log.go:172] (0xc0007366e0) (3) Data frame handling\nI0111 12:32:07.511157    2340 log.go:172] (0xc0007366e0) (3) Data frame sent\nI0111 12:32:07.662516    2340 log.go:172] (0xc0006f22c0) (0xc0007366e0) Stream removed, broadcasting: 3\nI0111 12:32:07.662707    2340 log.go:172] (0xc0006f22c0) Data frame received for 1\nI0111 12:32:07.662974    2340 log.go:172] (0xc0006f22c0) (0xc000792e60) Stream removed, broadcasting: 5\nI0111 12:32:07.663044    2340 log.go:172] (0xc000736640) (1) Data frame handling\nI0111 12:32:07.663079    2340 log.go:172] (0xc000736640) (1) Data frame sent\nI0111 12:32:07.663150    2340 log.go:172] (0xc0006f22c0) (0xc000736640) Stream removed, broadcasting: 1\nI0111 12:32:07.663513    2340 log.go:172] (0xc0006f22c0) Go away received\nI0111 12:32:07.663616    2340 log.go:172] (0xc0006f22c0) (0xc000736640) Stream removed, broadcasting: 1\nI0111 12:32:07.663627    2340 log.go:172] (0xc0006f22c0) (0xc0007366e0) Stream removed, broadcasting: 3\nI0111 12:32:07.663639    2340 log.go:172] (0xc0006f22c0) (0xc000792e60) Stream removed, broadcasting: 5\n"
Jan 11 12:32:07.672: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 11 12:32:07.672: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 11 12:32:07.687: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 11 12:32:17.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 12:32:17.699: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 12:32:17.790: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 11 12:32:17.791: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:46 +0000 UTC  }]
Jan 11 12:32:17.791: INFO: 
Jan 11 12:32:17.791: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 11 12:32:19.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.931871805s
Jan 11 12:32:20.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.924154721s
Jan 11 12:32:21.835: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.913632727s
Jan 11 12:32:22.866: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.887861243s
Jan 11 12:32:25.492: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.856374255s
Jan 11 12:32:26.854: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.23088307s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-zdcfh
Jan 11 12:32:27.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zdcfh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 11 12:32:28.794: INFO: stderr: "I0111 12:32:28.252947    2361 log.go:172] (0xc000154840) (0xc0005f1540) Create stream\nI0111 12:32:28.253124    2361 log.go:172] (0xc000154840) (0xc0005f1540) Stream added, broadcasting: 1\nI0111 12:32:28.270604    2361 log.go:172] (0xc000154840) Reply frame received for 1\nI0111 12:32:28.270660    2361 log.go:172] (0xc000154840) (0xc0007be000) Create stream\nI0111 12:32:28.270669    2361 log.go:172] (0xc000154840) (0xc0007be000) Stream added, broadcasting: 3\nI0111 12:32:28.272283    2361 log.go:172] (0xc000154840) Reply frame received for 3\nI0111 12:32:28.272307    2361 log.go:172] (0xc000154840) (0xc0005f15e0) Create stream\nI0111 12:32:28.272314    2361 log.go:172] (0xc000154840) (0xc0005f15e0) Stream added, broadcasting: 5\nI0111 12:32:28.273763    2361 log.go:172] (0xc000154840) Reply frame received for 5\nI0111 12:32:28.465212    2361 log.go:172] (0xc000154840) Data frame received for 3\nI0111 12:32:28.465288    2361 log.go:172] (0xc0007be000) (3) Data frame handling\nI0111 12:32:28.465306    2361 log.go:172] (0xc0007be000) (3) Data frame sent\nI0111 12:32:28.785754    2361 log.go:172] (0xc000154840) Data frame received for 1\nI0111 12:32:28.786010    2361 log.go:172] (0xc0005f1540) (1) Data frame handling\nI0111 12:32:28.786030    2361 log.go:172] (0xc0005f1540) (1) Data frame sent\nI0111 12:32:28.786674    2361 log.go:172] (0xc000154840) (0xc0005f1540) Stream removed, broadcasting: 1\nI0111 12:32:28.787875    2361 log.go:172] (0xc000154840) (0xc0005f15e0) Stream removed, broadcasting: 5\nI0111 12:32:28.787955    2361 log.go:172] (0xc000154840) (0xc0007be000) Stream removed, broadcasting: 3\nI0111 12:32:28.787995    2361 log.go:172] (0xc000154840) (0xc0005f1540) Stream removed, broadcasting: 1\nI0111 12:32:28.788020    2361 log.go:172] (0xc000154840) (0xc0007be000) Stream removed, broadcasting: 3\nI0111 12:32:28.788029    2361 log.go:172] (0xc000154840) (0xc0005f15e0) Stream removed, broadcasting: 5\nI0111 12:32:28.788143    2361 log.go:172] (0xc000154840) Go away received\n"
Jan 11 12:32:28.794: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 11 12:32:28.794: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 11 12:32:28.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zdcfh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 11 12:32:29.408: INFO: stderr: "I0111 12:32:29.111025    2384 log.go:172] (0xc000138840) (0xc00058d220) Create stream\nI0111 12:32:29.111196    2384 log.go:172] (0xc000138840) (0xc00058d220) Stream added, broadcasting: 1\nI0111 12:32:29.116333    2384 log.go:172] (0xc000138840) Reply frame received for 1\nI0111 12:32:29.116362    2384 log.go:172] (0xc000138840) (0xc00058d2c0) Create stream\nI0111 12:32:29.116371    2384 log.go:172] (0xc000138840) (0xc00058d2c0) Stream added, broadcasting: 3\nI0111 12:32:29.120224    2384 log.go:172] (0xc000138840) Reply frame received for 3\nI0111 12:32:29.120254    2384 log.go:172] (0xc000138840) (0xc000712000) Create stream\nI0111 12:32:29.120265    2384 log.go:172] (0xc000138840) (0xc000712000) Stream added, broadcasting: 5\nI0111 12:32:29.121841    2384 log.go:172] (0xc000138840) Reply frame received for 5\nI0111 12:32:29.244186    2384 log.go:172] (0xc000138840) Data frame received for 5\nI0111 12:32:29.244251    2384 log.go:172] (0xc000712000) (5) Data frame handling\nI0111 12:32:29.244264    2384 log.go:172] (0xc000712000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0111 12:32:29.244283    2384 log.go:172] (0xc000138840) Data frame received for 3\nI0111 12:32:29.244292    2384 log.go:172] (0xc00058d2c0) (3) Data frame handling\nI0111 12:32:29.244303    2384 log.go:172] (0xc00058d2c0) (3) Data frame sent\nI0111 12:32:29.402940    2384 log.go:172] (0xc000138840) (0xc00058d2c0) Stream removed, broadcasting: 3\nI0111 12:32:29.403065    2384 log.go:172] (0xc000138840) Data frame received for 1\nI0111 12:32:29.403120    2384 log.go:172] (0xc00058d220) (1) Data frame handling\nI0111 12:32:29.403149    2384 log.go:172] (0xc00058d220) (1) Data frame sent\nI0111 12:32:29.403164    2384 log.go:172] (0xc000138840) (0xc00058d220) Stream removed, broadcasting: 1\nI0111 12:32:29.403179    2384 log.go:172] (0xc000138840) (0xc000712000) Stream removed, broadcasting: 5\nI0111 12:32:29.403245    2384 log.go:172] (0xc000138840) Go away received\nI0111 12:32:29.403286    2384 log.go:172] (0xc000138840) (0xc00058d220) Stream removed, broadcasting: 1\nI0111 12:32:29.403301    2384 log.go:172] (0xc000138840) (0xc00058d2c0) Stream removed, broadcasting: 3\nI0111 12:32:29.403309    2384 log.go:172] (0xc000138840) (0xc000712000) Stream removed, broadcasting: 5\n"
Jan 11 12:32:29.408: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 11 12:32:29.408: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 11 12:32:29.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zdcfh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 11 12:32:29.875: INFO: stderr: "I0111 12:32:29.591412    2406 log.go:172] (0xc000714370) (0xc00073a5a0) Create stream\nI0111 12:32:29.591633    2406 log.go:172] (0xc000714370) (0xc00073a5a0) Stream added, broadcasting: 1\nI0111 12:32:29.595788    2406 log.go:172] (0xc000714370) Reply frame received for 1\nI0111 12:32:29.595827    2406 log.go:172] (0xc000714370) (0xc0005d6d20) Create stream\nI0111 12:32:29.595840    2406 log.go:172] (0xc000714370) (0xc0005d6d20) Stream added, broadcasting: 3\nI0111 12:32:29.596896    2406 log.go:172] (0xc000714370) Reply frame received for 3\nI0111 12:32:29.596917    2406 log.go:172] (0xc000714370) (0xc00064c000) Create stream\nI0111 12:32:29.596925    2406 log.go:172] (0xc000714370) (0xc00064c000) Stream added, broadcasting: 5\nI0111 12:32:29.598400    2406 log.go:172] (0xc000714370) Reply frame received for 5\nI0111 12:32:29.703062    2406 log.go:172] (0xc000714370) Data frame received for 5\nI0111 12:32:29.703161    2406 log.go:172] (0xc00064c000) (5) Data frame handling\nI0111 12:32:29.703173    2406 log.go:172] (0xc00064c000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0111 12:32:29.703192    2406 log.go:172] (0xc000714370) Data frame received for 3\nI0111 12:32:29.703209    2406 log.go:172] (0xc0005d6d20) (3) Data frame handling\nI0111 12:32:29.703216    2406 log.go:172] (0xc0005d6d20) (3) Data frame sent\nI0111 12:32:29.867739    2406 log.go:172] (0xc000714370) Data frame received for 1\nI0111 12:32:29.867828    2406 log.go:172] (0xc00073a5a0) (1) Data frame handling\nI0111 12:32:29.867878    2406 log.go:172] (0xc00073a5a0) (1) Data frame sent\nI0111 12:32:29.867893    2406 log.go:172] (0xc000714370) (0xc00073a5a0) Stream removed, broadcasting: 1\nI0111 12:32:29.868797    2406 log.go:172] (0xc000714370) (0xc0005d6d20) Stream removed, broadcasting: 3\nI0111 12:32:29.868846    2406 log.go:172] (0xc000714370) (0xc00064c000) Stream removed, broadcasting: 5\nI0111 12:32:29.868887    2406 log.go:172] (0xc000714370) Go away received\nI0111 12:32:29.868974    2406 log.go:172] (0xc000714370) (0xc00073a5a0) Stream removed, broadcasting: 1\nI0111 12:32:29.868994    2406 log.go:172] (0xc000714370) (0xc0005d6d20) Stream removed, broadcasting: 3\nI0111 12:32:29.869001    2406 log.go:172] (0xc000714370) (0xc00064c000) Stream removed, broadcasting: 5\n"
Jan 11 12:32:29.875: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 11 12:32:29.875: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 11 12:32:29.907: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:32:29.907: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false
Jan 11 12:32:39.942: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:32:39.942: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:32:39.942: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 11 12:32:39.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zdcfh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 11 12:32:40.550: INFO: stderr: "I0111 12:32:40.146681    2428 log.go:172] (0xc00084e2c0) (0xc000710640) Create stream\nI0111 12:32:40.147212    2428 log.go:172] (0xc00084e2c0) (0xc000710640) Stream added, broadcasting: 1\nI0111 12:32:40.153189    2428 log.go:172] (0xc00084e2c0) Reply frame received for 1\nI0111 12:32:40.153219    2428 log.go:172] (0xc00084e2c0) (0xc00064cdc0) Create stream\nI0111 12:32:40.153228    2428 log.go:172] (0xc00084e2c0) (0xc00064cdc0) Stream added, broadcasting: 3\nI0111 12:32:40.154265    2428 log.go:172] (0xc00084e2c0) Reply frame received for 3\nI0111 12:32:40.154305    2428 log.go:172] (0xc00084e2c0) (0xc00064cf00) Create stream\nI0111 12:32:40.154320    2428 log.go:172] (0xc00084e2c0) (0xc00064cf00) Stream added, broadcasting: 5\nI0111 12:32:40.156018    2428 log.go:172] (0xc00084e2c0) Reply frame received for 5\nI0111 12:32:40.310679    2428 log.go:172] (0xc00084e2c0) Data frame received for 3\nI0111 12:32:40.310754    2428 log.go:172] (0xc00064cdc0) (3) Data frame handling\nI0111 12:32:40.310775    2428 log.go:172] (0xc00064cdc0) (3) Data frame sent\nI0111 12:32:40.543020    2428 log.go:172] (0xc00084e2c0) Data frame received for 1\nI0111 12:32:40.543172    2428 log.go:172] (0xc000710640) (1) Data frame handling\nI0111 12:32:40.543211    2428 log.go:172] (0xc000710640) (1) Data frame sent\nI0111 12:32:40.543290    2428 log.go:172] (0xc00084e2c0) (0xc000710640) Stream removed, broadcasting: 1\nI0111 12:32:40.543488    2428 log.go:172] (0xc00084e2c0) (0xc00064cf00) Stream removed, broadcasting: 5\nI0111 12:32:40.543558    2428 log.go:172] (0xc00084e2c0) (0xc00064cdc0) Stream removed, broadcasting: 3\nI0111 12:32:40.543675    2428 log.go:172] (0xc00084e2c0) (0xc000710640) Stream removed, broadcasting: 1\nI0111 12:32:40.543705    2428 log.go:172] (0xc00084e2c0) (0xc00064cdc0) Stream removed, broadcasting: 3\nI0111 12:32:40.543717    2428 log.go:172] (0xc00084e2c0) (0xc00064cf00) Stream removed, broadcasting: 5\nI0111 12:32:40.544048    2428 log.go:172] (0xc00084e2c0) Go away received\n"
Jan 11 12:32:40.551: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 11 12:32:40.551: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 11 12:32:40.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zdcfh ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 11 12:32:41.186: INFO: stderr: "I0111 12:32:40.756766    2450 log.go:172] (0xc0008162c0) (0xc0006f2640) Create stream\nI0111 12:32:40.756961    2450 log.go:172] (0xc0008162c0) (0xc0006f2640) Stream added, broadcasting: 1\nI0111 12:32:40.761480    2450 log.go:172] (0xc0008162c0) Reply frame received for 1\nI0111 12:32:40.761518    2450 log.go:172] (0xc0008162c0) (0xc0005c2d20) Create stream\nI0111 12:32:40.761532    2450 log.go:172] (0xc0008162c0) (0xc0005c2d20) Stream added, broadcasting: 3\nI0111 12:32:40.762802    2450 log.go:172] (0xc0008162c0) Reply frame received for 3\nI0111 12:32:40.762836    2450 log.go:172] (0xc0008162c0) (0xc0006f26e0) Create stream\nI0111 12:32:40.762870    2450 log.go:172] (0xc0008162c0) (0xc0006f26e0) Stream added, broadcasting: 5\nI0111 12:32:40.764572    2450 log.go:172] (0xc0008162c0) Reply frame received for 5\nI0111 12:32:41.025806    2450 log.go:172] (0xc0008162c0) Data frame received for 3\nI0111 12:32:41.025926    2450 log.go:172] (0xc0005c2d20) (3) Data frame handling\nI0111 12:32:41.025982    2450 log.go:172] (0xc0005c2d20) (3) Data frame sent\nI0111 12:32:41.182147    2450 log.go:172] (0xc0008162c0) Data frame received for 1\nI0111 12:32:41.182184    2450 log.go:172] (0xc0006f2640) (1) Data frame handling\nI0111 12:32:41.182203    2450 log.go:172] (0xc0006f2640) (1) Data frame sent\nI0111 12:32:41.182216    2450 log.go:172] (0xc0008162c0) (0xc0006f2640) Stream removed, broadcasting: 1\nI0111 12:32:41.182352    2450 log.go:172] (0xc0008162c0) (0xc0005c2d20) Stream removed, broadcasting: 3\nI0111 12:32:41.182492    2450 log.go:172] (0xc0008162c0) (0xc0006f26e0) Stream removed, broadcasting: 5\nI0111 12:32:41.182525    2450 log.go:172] (0xc0008162c0) (0xc0006f2640) Stream removed, broadcasting: 1\nI0111 12:32:41.182538    2450 log.go:172] (0xc0008162c0) (0xc0005c2d20) Stream removed, broadcasting: 3\nI0111 12:32:41.182570    2450 log.go:172] (0xc0008162c0) (0xc0006f26e0) Stream removed, broadcasting: 5\n"
Jan 11 12:32:41.187: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 11 12:32:41.187: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 11 12:32:41.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-zdcfh ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 11 12:32:41.658: INFO: stderr: "I0111 12:32:41.366602    2472 log.go:172] (0xc0008be210) (0xc0008ba5a0) Create stream\nI0111 12:32:41.366764    2472 log.go:172] (0xc0008be210) (0xc0008ba5a0) Stream added, broadcasting: 1\nI0111 12:32:41.373216    2472 log.go:172] (0xc0008be210) Reply frame received for 1\nI0111 12:32:41.373246    2472 log.go:172] (0xc0008be210) (0xc000606b40) Create stream\nI0111 12:32:41.373253    2472 log.go:172] (0xc0008be210) (0xc000606b40) Stream added, broadcasting: 3\nI0111 12:32:41.374501    2472 log.go:172] (0xc0008be210) Reply frame received for 3\nI0111 12:32:41.374521    2472 log.go:172] (0xc0008be210) (0xc0008ba640) Create stream\nI0111 12:32:41.374526    2472 log.go:172] (0xc0008be210) (0xc0008ba640) Stream added, broadcasting: 5\nI0111 12:32:41.375862    2472 log.go:172] (0xc0008be210) Reply frame received for 5\nI0111 12:32:41.545570    2472 log.go:172] (0xc0008be210) Data frame received for 3\nI0111 12:32:41.545637    2472 log.go:172] (0xc000606b40) (3) Data frame handling\nI0111 12:32:41.545650    2472 log.go:172] (0xc000606b40) (3) Data frame sent\nI0111 12:32:41.651803    2472 log.go:172] (0xc0008be210) Data frame received for 1\nI0111 12:32:41.652130    2472 log.go:172] (0xc0008be210) (0xc000606b40) Stream removed, broadcasting: 3\nI0111 12:32:41.652259    2472 log.go:172] (0xc0008ba5a0) (1) Data frame handling\nI0111 12:32:41.652278    2472 log.go:172] (0xc0008ba5a0) (1) Data frame sent\nI0111 12:32:41.652296    2472 log.go:172] (0xc0008be210) (0xc0008ba640) Stream removed, broadcasting: 5\nI0111 12:32:41.652335    2472 log.go:172] (0xc0008be210) (0xc0008ba5a0) Stream removed, broadcasting: 1\nI0111 12:32:41.652355    2472 log.go:172] (0xc0008be210) Go away received\nI0111 12:32:41.652546    2472 log.go:172] (0xc0008be210) (0xc0008ba5a0) Stream removed, broadcasting: 1\nI0111 12:32:41.652571    2472 log.go:172] (0xc0008be210) (0xc000606b40) Stream removed, broadcasting: 3\nI0111 12:32:41.652580    2472 log.go:172] (0xc0008be210) (0xc0008ba640) Stream removed, broadcasting: 5\n"
Jan 11 12:32:41.659: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 11 12:32:41.659: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 11 12:32:41.659: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 12:32:41.672: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 11 12:32:51.690: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 12:32:51.690: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 12:32:51.690: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 12:32:51.722: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 11 12:32:51.722: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:46 +0000 UTC  }]
Jan 11 12:32:51.723: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:51.723: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:51.723: INFO: 
Jan 11 12:32:51.723: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 12:32:52.738: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 11 12:32:52.738: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:46 +0000 UTC  }]
Jan 11 12:32:52.738: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:52.738: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:52.738: INFO: 
Jan 11 12:32:52.738: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 12:32:54.208: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 11 12:32:54.208: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:46 +0000 UTC  }]
Jan 11 12:32:54.208: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:54.208: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:54.209: INFO: 
Jan 11 12:32:54.209: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 12:32:55.221: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 11 12:32:55.221: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:46 +0000 UTC  }]
Jan 11 12:32:55.221: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:55.221: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:55.221: INFO: 
Jan 11 12:32:55.221: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 12:32:56.246: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 11 12:32:56.246: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:46 +0000 UTC  }]
Jan 11 12:32:56.246: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:56.246: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:56.246: INFO: 
Jan 11 12:32:56.246: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 12:32:58.094: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 11 12:32:58.094: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:46 +0000 UTC  }]
Jan 11 12:32:58.094: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:58.094: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:58.094: INFO: 
Jan 11 12:32:58.094: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 12:32:59.112: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 11 12:32:59.112: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:46 +0000 UTC  }]
Jan 11 12:32:59.112: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:59.112: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:32:59.112: INFO: 
Jan 11 12:32:59.112: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 12:33:00.219: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 11 12:33:00.219: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:46 +0000 UTC  }]
Jan 11 12:33:00.220: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:33:00.220: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:33:00.220: INFO: 
Jan 11 12:33:00.220: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 11 12:33:01.237: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 11 12:33:01.237: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:31:46 +0000 UTC  }]
Jan 11 12:33:01.238: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:33:01.238: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:32:17 +0000 UTC  }]
Jan 11 12:33:01.238: INFO: 
Jan 11 12:33:01.238: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-zdcfh
Jan 11 12:33:02.249: INFO: Scaling statefulset ss to 0
Jan 11 12:33:02.265: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 11 12:33:02.268: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zdcfh
Jan 11 12:33:02.271: INFO: Scaling statefulset ss to 0
Jan 11 12:33:02.282: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 12:33:02.285: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:33:02.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-zdcfh" for this suite.
Jan 11 12:33:10.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:33:10.552: INFO: namespace: e2e-tests-statefulset-zdcfh, resource: bindings, ignored listing per whitelist
Jan 11 12:33:10.692: INFO: namespace e2e-tests-statefulset-zdcfh deletion completed in 8.327519404s

• [SLOW TEST:84.230 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
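Editor's note: the burst scale-down exercised above can be reproduced with plain kubectl. A minimal sketch using the StatefulSet name and namespace from this run; burst behaviour assumes the StatefulSet was created with podManagementPolicy: Parallel, which is not shown in this log:

kubectl scale statefulset ss --replicas=0 --namespace=e2e-tests-statefulset-zdcfh
# poll until the controller reports no remaining replicas
kubectl get statefulset ss --namespace=e2e-tests-statefulset-zdcfh -o jsonpath='{.status.replicas}'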
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:33:10.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0111 12:33:21.492177       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 11 12:33:21.492: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:33:21.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dbgp4" for this suite.
Jan 11 12:33:27.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:33:27.621: INFO: namespace: e2e-tests-gc-dbgp4, resource: bindings, ignored listing per whitelist
Jan 11 12:33:27.804: INFO: namespace e2e-tests-gc-dbgp4 deletion completed in 6.253957874s

• [SLOW TEST:17.112 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
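Editor's note: the behaviour verified here is ordinary cascading deletion of an rc's pods. A sketch with an illustrative ReplicationController name; on kubectl of this vintage (v1.13) orphaning was requested with --cascade=false:

# default: dependent pods are garbage-collected along with the rc (the case this test asserts)
kubectl delete rc my-rc --namespace=e2e-tests-gc-dbgp4
# orphaning instead, which this test deliberately does NOT do
kubectl delete rc my-rc --cascade=false --namespace=e2e-tests-gc-dbgp4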
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:33:27.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 11 12:33:27.985: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:33:46.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7x9zk" for this suite.
Jan 11 12:33:52.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:33:52.913: INFO: namespace: e2e-tests-init-container-7x9zk, resource: bindings, ignored listing per whitelist
Jan 11 12:33:52.956: INFO: namespace e2e-tests-init-container-7x9zk deletion completed in 6.21068495s

• [SLOW TEST:25.152 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
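Editor's note: a minimal pod with the same shape as this test — init containers that must all succeed before the app container of a restartPolicy: Never pod runs. Names and images below are illustrative, not the ones the conformance test uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ['sh', '-c', 'echo init-1 done']
  - name: init-2
    image: busybox
    command: ['sh', '-c', 'echo init-2 done']
  containers:
  - name: main
    image: busybox
    command: ['sh', '-c', 'echo main done']
EOF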
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:33:52.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 11 12:33:53.134: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-cs69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-cs69c/configmaps/e2e-watch-test-label-changed,UID:a06aa498-346e-11ea-a994-fa163e34d433,ResourceVersion:17926122,Generation:0,CreationTimestamp:2020-01-11 12:33:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 11 12:33:53.134: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-cs69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-cs69c/configmaps/e2e-watch-test-label-changed,UID:a06aa498-346e-11ea-a994-fa163e34d433,ResourceVersion:17926123,Generation:0,CreationTimestamp:2020-01-11 12:33:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 11 12:33:53.134: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-cs69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-cs69c/configmaps/e2e-watch-test-label-changed,UID:a06aa498-346e-11ea-a994-fa163e34d433,ResourceVersion:17926124,Generation:0,CreationTimestamp:2020-01-11 12:33:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 11 12:34:03.208: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-cs69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-cs69c/configmaps/e2e-watch-test-label-changed,UID:a06aa498-346e-11ea-a994-fa163e34d433,ResourceVersion:17926138,Generation:0,CreationTimestamp:2020-01-11 12:33:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 11 12:34:03.209: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-cs69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-cs69c/configmaps/e2e-watch-test-label-changed,UID:a06aa498-346e-11ea-a994-fa163e34d433,ResourceVersion:17926139,Generation:0,CreationTimestamp:2020-01-11 12:33:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 11 12:34:03.209: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-cs69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-cs69c/configmaps/e2e-watch-test-label-changed,UID:a06aa498-346e-11ea-a994-fa163e34d433,ResourceVersion:17926140,Generation:0,CreationTimestamp:2020-01-11 12:33:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:34:03.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-cs69c" for this suite.
Jan 11 12:34:09.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:34:09.334: INFO: namespace: e2e-tests-watch-cs69c, resource: bindings, ignored listing per whitelist
Jan 11 12:34:09.420: INFO: namespace e2e-tests-watch-cs69c deletion completed in 6.202657353s

• [SLOW TEST:16.463 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
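Editor's note: the same label-selector watch can be observed interactively. The selector and namespace below are the ones from this run; note that plain kubectl streams object updates rather than printing the raw ADDED/MODIFIED/DELETED event types shown above:

kubectl get configmaps --namespace=e2e-tests-watch-cs69c \
  --selector=watch-this-configmap=label-changed-and-restored --watch

Removing the label from the configmap surfaces to such a watcher as a deletion, and restoring it surfaces as an add, which is exactly what the test asserts.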
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:34:09.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:34:09.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-wjfp6" for this suite.
Jan 11 12:34:15.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:34:16.010: INFO: namespace: e2e-tests-kubelet-test-wjfp6, resource: bindings, ignored listing per whitelist
Jan 11 12:34:16.123: INFO: namespace e2e-tests-kubelet-test-wjfp6 deletion completed in 6.215841311s

• [SLOW TEST:6.703 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:34:16.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 12:34:17.337: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-6747q" to be "success or failure"
Jan 11 12:34:17.362: INFO: Pod "downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.372981ms
Jan 11 12:34:19.535: INFO: Pod "downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197219324s
Jan 11 12:34:21.546: INFO: Pod "downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208527686s
Jan 11 12:34:23.566: INFO: Pod "downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.229170918s
Jan 11 12:34:25.578: INFO: Pod "downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240448116s
Jan 11 12:34:28.695: INFO: Pod "downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.357827459s
STEP: Saw pod success
Jan 11 12:34:28.695: INFO: Pod "downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:34:28.731: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 12:34:28.876: INFO: Waiting for pod downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:34:28.883: INFO: Pod downwardapi-volume-aebb8f07-346e-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:34:28.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6747q" for this suite.
Jan 11 12:34:34.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:34:35.016: INFO: namespace: e2e-tests-projected-6747q, resource: bindings, ignored listing per whitelist
Jan 11 12:34:35.072: INFO: namespace e2e-tests-projected-6747q deletion completed in 6.182408884s

• [SLOW TEST:18.947 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
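Editor's note: a sketch of the kind of pod this test creates — a projected downwardAPI volume exposing limits.cpu for a container that sets no CPU limit, so the published value falls back to node allocatable CPU. Pod name, image and mount path are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ['sh', '-c', 'cat /etc/podinfo/cpu_limit']
    # no resources.limits.cpu on purpose: the downward API then reports node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF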
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:34:35.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 11 12:34:35.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-nqpsr'
Jan 11 12:34:36.954: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 11 12:34:36.954: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 11 12:34:41.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-nqpsr'
Jan 11 12:34:41.654: INFO: stderr: ""
Jan 11 12:34:41.654: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:34:41.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nqpsr" for this suite.
Jan 11 12:34:49.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:34:49.962: INFO: namespace: e2e-tests-kubectl-nqpsr, resource: bindings, ignored listing per whitelist
Jan 11 12:34:49.987: INFO: namespace e2e-tests-kubectl-nqpsr deletion completed in 8.322773242s

• [SLOW TEST:14.915 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
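Editor's note: the stderr above flags --generator=deployment/v1beta1 as deprecated. The non-deprecated equivalent of the command this test runs is kubectl create deployment; the name, image and namespace below are the ones from this run:

kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-nqpsr
kubectl delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-nqpsr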
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:34:49.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 11 12:34:50.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-fsk4s'
Jan 11 12:34:50.435: INFO: stderr: ""
Jan 11 12:34:50.435: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 11 12:34:50.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-fsk4s'
Jan 11 12:34:51.007: INFO: stderr: ""
Jan 11 12:34:51.007: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:34:51.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fsk4s" for this suite.
Jan 11 12:34:57.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:34:57.119: INFO: namespace: e2e-tests-kubectl-fsk4s, resource: bindings, ignored listing per whitelist
Jan 11 12:34:57.233: INFO: namespace e2e-tests-kubectl-fsk4s deletion completed in 6.208695513s

• [SLOW TEST:7.246 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:34:57.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:35:09.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-jh5sh" for this suite.
Jan 11 12:35:15.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:35:15.912: INFO: namespace: e2e-tests-kubelet-test-jh5sh, resource: bindings, ignored listing per whitelist
Jan 11 12:35:15.917: INFO: namespace e2e-tests-kubelet-test-jh5sh deletion completed in 6.286824841s

• [SLOW TEST:18.683 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
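Editor's note: a hypothetical stand-alone reproduction of the terminated reason this test inspects — run a pod whose only command fails, then read the reason out of its container status. The pod name and command are illustrative:

kubectl run always-fails --image=busybox --restart=Never --command -- /bin/false
# once the container has exited, the status carries a terminated state with a reason (typically "Error")
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'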
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:35:15.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-m78fv
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-m78fv
STEP: Deleting pre-stop pod
Jan 11 12:35:41.895: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:35:41.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-m78fv" for this suite.
Jan 11 12:36:22.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:36:22.374: INFO: namespace: e2e-tests-prestop-m78fv, resource: bindings, ignored listing per whitelist
Jan 11 12:36:22.444: INFO: namespace e2e-tests-prestop-m78fv deletion completed in 40.343874053s

• [SLOW TEST:66.527 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
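Editor's note: the preStop path exercised above can be sketched with a plain pod manifest — a lifecycle.preStop exec hook that notifies some server before the container is sent SIGTERM. The image, sleep command and target URL below are placeholders, not the ones the test uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: tester
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    lifecycle:
      preStop:
        exec:
          # placeholder endpoint; the e2e test contacts its own "server" pod instead
          command: ['sh', '-c', 'wget -q -O- http://prestop-server:8080/prestop || true']
EOF
# deleting the pod runs the preStop hook before the container receives SIGTERM
kubectl delete pod prestop-demo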
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:36:22.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f998cc4d-346e-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 12:36:22.697: INFO: Waiting up to 5m0s for pod "pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005" in namespace "e2e-tests-secrets-tf7cr" to be "success or failure"
Jan 11 12:36:22.708: INFO: Pod "pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.73583ms
Jan 11 12:36:24.747: INFO: Pod "pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049993455s
Jan 11 12:36:26.762: INFO: Pod "pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065140324s
Jan 11 12:36:28.850: INFO: Pod "pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152881055s
Jan 11 12:36:30.872: INFO: Pod "pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174330304s
Jan 11 12:36:32.894: INFO: Pod "pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.196484452s
STEP: Saw pod success
Jan 11 12:36:32.894: INFO: Pod "pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:36:32.907: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 11 12:36:33.153: INFO: Waiting for pod pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:36:33.170: INFO: Pod pod-secrets-f999912e-346e-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:36:33.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tf7cr" for this suite.
Jan 11 12:36:39.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:36:39.358: INFO: namespace: e2e-tests-secrets-tf7cr, resource: bindings, ignored listing per whitelist
Jan 11 12:36:39.425: INFO: namespace e2e-tests-secrets-tf7cr deletion completed in 6.243321768s

• [SLOW TEST:16.980 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
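Editor's note: a sketch of the pod shape this test verifies — a secret volume mounted with a restrictive defaultMode while the pod runs as a non-root user with an fsGroup, so the projected files remain group-readable. Secret name, user/group IDs and paths are illustrative:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-nonroot-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  containers:
  - name: secret-volume-test
    image: busybox
    command: ['sh', '-c', 'ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1']
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      # group-readable so the non-root user (via fsGroup) can still read the files
      defaultMode: 0440
EOF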
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:36:39.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 11 12:36:48.460: INFO: 10 pods remaining
Jan 11 12:36:48.460: INFO: 10 pods has nil DeletionTimestamp
Jan 11 12:36:48.460: INFO: 
Jan 11 12:36:52.145: INFO: 7 pods remaining
Jan 11 12:36:52.146: INFO: 0 pods has nil DeletionTimestamp
Jan 11 12:36:52.146: INFO: 
Jan 11 12:36:52.475: INFO: 0 pods remaining
Jan 11 12:36:52.476: INFO: 0 pods has nil DeletionTimestamp
Jan 11 12:36:52.476: INFO: 
STEP: Gathering metrics
W0111 12:36:53.218925       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 11 12:36:53.218: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:36:53.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-8bfjx" for this suite.
Jan 11 12:37:05.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:37:06.443: INFO: namespace: e2e-tests-gc-8bfjx, resource: bindings, ignored listing per whitelist
Jan 11 12:37:06.627: INFO: namespace e2e-tests-gc-8bfjx deletion completed in 13.405125094s

• [SLOW TEST:27.202 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
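Editor's note: "keep the rc around until all its pods are deleted" is foreground cascading deletion, which is requested through DeleteOptions on the API rather than a kubectl flag of this era. A hypothetical reproduction via kubectl proxy; the rc name is illustrative:

kubectl proxy --port=8001 &
curl -X DELETE \
  'http://127.0.0.1:8001/api/v1/namespaces/e2e-tests-gc-8bfjx/replicationcontrollers/my-rc' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# the rc keeps a deletionTimestamp and a foregroundDeletion finalizer until its pods are gone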
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:37:06.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 11 12:37:16.921: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-13eafcd8-346f-11ea-b0bd-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-c9l46", SelfLink:"/api/v1/namespaces/e2e-tests-pods-c9l46/pods/pod-submit-remove-13eafcd8-346f-11ea-b0bd-0242ac110005", UID:"13ed0dbf-346f-11ea-a994-fa163e34d433", ResourceVersion:"17926661", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714343026, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"840301517"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6wxf9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002a6c200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6wxf9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a9e228), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00263c0c0), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a9e260)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a9e280)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002a9e288), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a9e28c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343026, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343036, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343036, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343026, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00242c740), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00242c760), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://fb0c95a4e9016deb0864bc0eb9bc19a48b2c6ae38bbeb541bc1707ae8612dab5"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:37:24.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-c9l46" for this suite.
Jan 11 12:37:32.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:37:32.878: INFO: namespace: e2e-tests-pods-c9l46, resource: bindings, ignored listing per whitelist
Jan 11 12:37:32.982: INFO: namespace e2e-tests-pods-c9l46 deletion completed in 8.257754366s

• [SLOW TEST:26.354 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
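
Stripped of the harness, the submit-and-remove flow above amounts to creating a pod, deleting it with a grace period, and watching it disappear. A hand-run sketch (pod name and grace period are illustrative, not taken from the suite; the image matches this run):

  kubectl run pod-submit-remove-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
  kubectl delete pod pod-submit-remove-demo --grace-period=30   # graceful delete, as in the test
  kubectl get pods --watch                                      # pod goes Terminating, then disappears
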
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:37:32.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 11 12:37:33.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rvk5q'
Jan 11 12:37:33.680: INFO: stderr: ""
Jan 11 12:37:33.680: INFO: stdout: "pod/pause created\n"
Jan 11 12:37:33.680: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 11 12:37:33.680: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-rvk5q" to be "running and ready"
Jan 11 12:37:33.729: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 48.683243ms
Jan 11 12:37:35.739: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059397628s
Jan 11 12:37:37.754: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074587992s
Jan 11 12:37:39.930: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250152026s
Jan 11 12:37:41.945: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265450355s
Jan 11 12:37:43.969: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.289468324s
Jan 11 12:37:43.969: INFO: Pod "pause" satisfied condition "running and ready"
Jan 11 12:37:43.970: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 11 12:37:43.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-rvk5q'
Jan 11 12:37:44.198: INFO: stderr: ""
Jan 11 12:37:44.199: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 11 12:37:44.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-rvk5q'
Jan 11 12:37:44.327: INFO: stderr: ""
Jan 11 12:37:44.327: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 11 12:37:44.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-rvk5q'
Jan 11 12:37:44.520: INFO: stderr: ""
Jan 11 12:37:44.520: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 11 12:37:44.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-rvk5q'
Jan 11 12:37:44.647: INFO: stderr: ""
Jan 11 12:37:44.647: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 11 12:37:44.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rvk5q'
Jan 11 12:37:44.871: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 12:37:44.871: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 11 12:37:44.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-rvk5q'
Jan 11 12:37:45.049: INFO: stderr: "No resources found.\n"
Jan 11 12:37:45.049: INFO: stdout: ""
Jan 11 12:37:45.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-rvk5q -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 11 12:37:45.163: INFO: stderr: ""
Jan 11 12:37:45.163: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:37:45.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rvk5q" for this suite.
Jan 11 12:37:51.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:37:51.564: INFO: namespace: e2e-tests-kubectl-rvk5q, resource: bindings, ignored listing per whitelist
Jan 11 12:37:51.633: INFO: namespace e2e-tests-kubectl-rvk5q deletion completed in 6.372422007s

• [SLOW TEST:18.651 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
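
Outside the framework, the label add/verify/remove sequence above reduces to three kubectl invocations (pod name and namespace taken from this run):

  kubectl label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-rvk5q
  kubectl get pod pause -L testing-label --namespace=e2e-tests-kubectl-rvk5q
  kubectl label pods pause testing-label- --namespace=e2e-tests-kubectl-rvk5q   # trailing '-' removes the label
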
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:37:51.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 11 12:38:02.636: INFO: Successfully updated pod "pod-update-2ebac8c5-346f-11ea-b0bd-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan 11 12:38:02.671: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:38:02.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ls97q" for this suite.
Jan 11 12:38:26.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:38:26.823: INFO: namespace: e2e-tests-pods-ls97q, resource: bindings, ignored listing per whitelist
Jan 11 12:38:26.907: INFO: namespace e2e-tests-pods-ls97q deletion completed in 24.222651957s

• [SLOW TEST:35.274 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
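
The update step above mutates the live pod object in place; done by hand it is a single patch of pod metadata (pod name and label value below are illustrative, not from this run):

  kubectl patch pod pod-update-demo --type=merge -p '{"metadata":{"labels":{"time":"updated"}}}'
  kubectl get pod pod-update-demo --show-labels
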
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:38:26.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 12:38:27.142: INFO: Creating deployment "test-recreate-deployment"
Jan 11 12:38:27.150: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 11 12:38:27.166: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 11 12:38:29.188: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 11 12:38:29.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 12:38:31.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 12:38:33.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 12:38:35.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343107, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 12:38:37.220: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 11 12:38:37.237: INFO: Updating deployment test-recreate-deployment
Jan 11 12:38:37.237: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods

[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 11 12:38:37.819: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-mdvdg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mdvdg/deployments/test-recreate-deployment,UID:43c81617-346f-11ea-a994-fa163e34d433,ResourceVersion:17926862,Generation:2,CreationTimestamp:2020-01-11 12:38:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-11 12:38:37 +0000 UTC 2020-01-11 12:38:37 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-11 12:38:37 +0000 UTC 2020-01-11 12:38:27 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 11 12:38:37.842: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-mdvdg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mdvdg/replicasets/test-recreate-deployment-589c4bfd,UID:49f3eb88-346f-11ea-a994-fa163e34d433,ResourceVersion:17926860,Generation:1,CreationTimestamp:2020-01-11 12:38:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 43c81617-346f-11ea-a994-fa163e34d433 0xc0026d595f 0xc0026d5970}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 11 12:38:37.842: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 11 12:38:37.843: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-mdvdg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mdvdg/replicasets/test-recreate-deployment-5bf7f65dc,UID:43cb8b71-346f-11ea-a994-fa163e34d433,ResourceVersion:17926851,Generation:2,CreationTimestamp:2020-01-11 12:38:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 43c81617-346f-11ea-a994-fa163e34d433 0xc0026d5ae0 0xc0026d5ae1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 11 12:38:37.912: INFO: Pod "test-recreate-deployment-589c4bfd-cpjpt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-cpjpt,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-mdvdg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mdvdg/pods/test-recreate-deployment-589c4bfd-cpjpt,UID:49fe0d88-346f-11ea-a994-fa163e34d433,ResourceVersion:17926864,Generation:0,CreationTimestamp:2020-01-11 12:38:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 49f3eb88-346f-11ea-a994-fa163e34d433 0xc0025de61f 0xc0025de630}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mt2rj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mt2rj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-mt2rj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025de690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025de6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:38:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:38:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:38:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-11 12:38:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-11 12:38:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:38:37.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-mdvdg" for this suite.
Jan 11 12:38:45.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:38:45.995: INFO: namespace: e2e-tests-deployment-mdvdg, resource: bindings, ignored listing per whitelist
Jan 11 12:38:46.818: INFO: namespace e2e-tests-deployment-mdvdg deletion completed in 8.897472119s

• [SLOW TEST:19.911 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
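
The behaviour verified above comes from the Recreate strategy: on a template change, every old pod is torn down before any new pod is created. A minimal manifest with that strategy, applied via kubectl apply -f, might look like this (names are illustrative; the image and labels match this run):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: recreate-demo
  spec:
    replicas: 1
    strategy:
      type: Recreate          # old pods are deleted before new ones are created
    selector:
      matchLabels:
        name: sample-pod-3
    template:
      metadata:
        labels:
          name: sample-pod-3
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine
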
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:38:46.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 11 12:38:47.018: INFO: Waiting up to 5m0s for pod "pod-4f9f1287-346f-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-m92qf" to be "success or failure"
Jan 11 12:38:47.199: INFO: Pod "pod-4f9f1287-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 180.234347ms
Jan 11 12:38:49.308: INFO: Pod "pod-4f9f1287-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289307633s
Jan 11 12:38:51.339: INFO: Pod "pod-4f9f1287-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320661192s
Jan 11 12:38:53.412: INFO: Pod "pod-4f9f1287-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.39383264s
Jan 11 12:38:56.280: INFO: Pod "pod-4f9f1287-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.261863165s
Jan 11 12:38:58.294: INFO: Pod "pod-4f9f1287-346f-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.275660597s
STEP: Saw pod success
Jan 11 12:38:58.294: INFO: Pod "pod-4f9f1287-346f-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:38:58.299: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4f9f1287-346f-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 12:38:58.636: INFO: Waiting for pod pod-4f9f1287-346f-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:38:58.703: INFO: Pod pod-4f9f1287-346f-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:38:58.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-m92qf" for this suite.
Jan 11 12:39:04.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:39:04.919: INFO: namespace: e2e-tests-emptydir-m92qf, resource: bindings, ignored listing per whitelist
Jan 11 12:39:05.086: INFO: namespace e2e-tests-emptydir-m92qf deletion completed in 6.364321215s

• [SLOW TEST:18.268 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
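
The test above writes into an emptyDir volume on the default (node disk) medium with mode 0777 and checks the result from inside the container. A rough stand-alone equivalent, applied via kubectl apply -f (the busybox image, file path, and pod name are illustrative; the suite uses its own mounttest image):

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0777-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo mount-tester > /test-volume/file && chmod 0777 /test-volume/file && ls -l /test-volume/file"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}            # default medium, as in the (root,0777,default) variant
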
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:39:05.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 12:39:05.298: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-vzc55" to be "success or failure"
Jan 11 12:39:05.313: INFO: Pod "downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.527968ms
Jan 11 12:39:07.401: INFO: Pod "downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103579138s
Jan 11 12:39:09.420: INFO: Pod "downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122306611s
Jan 11 12:39:11.437: INFO: Pod "downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139021213s
Jan 11 12:39:13.470: INFO: Pod "downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172103276s
Jan 11 12:39:15.490: INFO: Pod "downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.192369408s
STEP: Saw pod success
Jan 11 12:39:15.490: INFO: Pod "downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:39:15.503: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 12:39:16.605: INFO: Waiting for pod downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:39:16.766: INFO: Pod downwardapi-volume-5a8376bf-346f-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:39:16.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vzc55" for this suite.
Jan 11 12:39:22.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:39:22.987: INFO: namespace: e2e-tests-projected-vzc55, resource: bindings, ignored listing per whitelist
Jan 11 12:39:23.053: INFO: namespace e2e-tests-projected-vzc55 deletion completed in 6.27530795s

• [SLOW TEST:17.966 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
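
What the test reads back is the container's own memory request, exposed through a projected downwardAPI volume. A minimal manifest showing the same plumbing (pod name, image, paths, and request size are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-downwardapi-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
      resources:
        requests:
          memory: 32Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.memory
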
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:39:23.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 11 12:39:43.411: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 11 12:39:43.430: INFO: Pod pod-with-prestop-http-hook still exists
Jan 11 12:39:45.430: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 11 12:39:45.448: INFO: Pod pod-with-prestop-http-hook still exists
Jan 11 12:39:47.430: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 11 12:39:47.464: INFO: Pod pod-with-prestop-http-hook still exists
Jan 11 12:39:49.430: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 11 12:39:49.445: INFO: Pod pod-with-prestop-http-hook still exists
Jan 11 12:39:51.430: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 11 12:39:51.498: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:39:51.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qrz6j" for this suite.
Jan 11 12:40:15.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:40:15.633: INFO: namespace: e2e-tests-container-lifecycle-hook-qrz6j, resource: bindings, ignored listing per whitelist
Jan 11 12:40:15.749: INFO: namespace e2e-tests-container-lifecycle-hook-qrz6j deletion completed in 24.190646231s

• [SLOW TEST:52.695 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
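
The hook under test fires an HTTP GET against a separate handler pod when the container is stopped. A stripped-down pod spec with such a preStop hook (the handler address, port, and path are illustrative placeholders for whatever serves the hook in your cluster):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook-demo
  spec:
    containers:
    - name: main
      image: docker.io/library/nginx:1.14-alpine
      lifecycle:
        preStop:
          httpGet:
            host: 10.32.0.99         # illustrative handler pod IP
            port: 8080
            path: /echo?msg=prestop-hook
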
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:40:15.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-849eb9f9-346f-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 12:40:15.960: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-r9rkr" to be "success or failure"
Jan 11 12:40:15.968: INFO: Pod "pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191933ms
Jan 11 12:40:17.980: INFO: Pod "pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019678913s
Jan 11 12:40:19.996: INFO: Pod "pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035900236s
Jan 11 12:40:22.313: INFO: Pod "pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.352528605s
Jan 11 12:40:24.349: INFO: Pod "pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.389052805s
Jan 11 12:40:26.514: INFO: Pod "pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.553601531s
STEP: Saw pod success
Jan 11 12:40:26.514: INFO: Pod "pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:40:26.530: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 11 12:40:26.802: INFO: Waiting for pod pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:40:26.813: INFO: Pod pod-projected-configmaps-849fdb6a-346f-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:40:26.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r9rkr" for this suite.
Jan 11 12:40:32.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:40:33.279: INFO: namespace: e2e-tests-projected-r9rkr, resource: bindings, ignored listing per whitelist
Jan 11 12:40:33.342: INFO: namespace e2e-tests-projected-r9rkr deletion completed in 6.51404452s

• [SLOW TEST:17.593 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
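
The test mounts the same ConfigMap through two projected volumes in one pod and reads it from both mount points. A compact illustration (ConfigMap name, key, image, and paths are illustrative): create the ConfigMap, then apply a pod like the one below.

  kubectl create configmap projected-cm-demo --from-literal=data-1=value-1

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo-pod
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected-configmap-volume-1/data-1 /etc/projected-configmap-volume-2/data-1"]
      volumeMounts:
      - name: projected-configmap-volume-1
        mountPath: /etc/projected-configmap-volume-1
      - name: projected-configmap-volume-2
        mountPath: /etc/projected-configmap-volume-2
    volumes:
    - name: projected-configmap-volume-1
      projected:
        sources:
        - configMap:
            name: projected-cm-demo
    - name: projected-configmap-volume-2
      projected:
        sources:
        - configMap:
            name: projected-cm-demo
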
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:40:33.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:40:39.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-zqlh9" for this suite.
Jan 11 12:40:45.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:40:45.938: INFO: namespace: e2e-tests-namespaces-zqlh9, resource: bindings, ignored listing per whitelist
Jan 11 12:40:46.106: INFO: namespace e2e-tests-namespaces-zqlh9 deletion completed in 6.267189905s
STEP: Destroying namespace "e2e-tests-nsdeletetest-wckmm" for this suite.
Jan 11 12:40:46.112: INFO: Namespace e2e-tests-nsdeletetest-wckmm was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-fx6k5" for this suite.
Jan 11 12:40:52.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:40:52.210: INFO: namespace: e2e-tests-nsdeletetest-fx6k5, resource: bindings, ignored listing per whitelist
Jan 11 12:40:52.416: INFO: namespace e2e-tests-nsdeletetest-fx6k5 deletion completed in 6.304152734s

• [SLOW TEST:19.073 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
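
The assertion above is that deleting a namespace takes its Services with it, so a recreated namespace of the same name starts empty. By hand it looks roughly like this (names are illustrative; wait for the deletion to finish before recreating):

  kubectl create namespace nsdelete-demo
  kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
  kubectl delete namespace nsdelete-demo
  # ...after the namespace finishes terminating:
  kubectl create namespace nsdelete-demo
  kubectl get services -n nsdelete-demo        # nothing carried over
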
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:40:52.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-j89v7.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j89v7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-j89v7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-j89v7.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j89v7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-j89v7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 12:41:08.735: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.741: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.751: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.759: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.765: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.771: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.776: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j89v7.svc.cluster.local from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.780: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.784: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.789: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.794: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.799: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.804: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.808: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.812: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.815: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.819: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j89v7.svc.cluster.local from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.831: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.836: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.842: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005: the server could not find the requested resource (get pods dns-test-9a78d832-346f-11ea-b0bd-0242ac110005)
Jan 11 12:41:08.842: INFO: Lookups using e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j89v7.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j89v7.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 11 12:41:14.564: INFO: DNS probes using e2e-tests-dns-j89v7/dns-test-9a78d832-346f-11ea-b0bd-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:41:14.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-j89v7" for this suite.
Jan 11 12:41:22.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:41:22.908: INFO: namespace: e2e-tests-dns-j89v7, resource: bindings, ignored listing per whitelist
Jan 11 12:41:22.941: INFO: namespace e2e-tests-dns-j89v7 deletion completed in 8.214376722s

• [SLOW TEST:30.525 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
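
The wheezy/jessie probe scripts above boil down to resolving kubernetes.default (and the pod's own A record) from inside the cluster over both UDP and TCP. A quick one-off version of the same check, using an illustrative busybox image rather than the suite's dig images:

  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default
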
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:41:22.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 12:41:23.202: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.523066ms)
Jan 11 12:41:23.210: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.710484ms)
Jan 11 12:41:23.217: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.252741ms)
Jan 11 12:41:23.229: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.154065ms)
Jan 11 12:41:23.237: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.513707ms)
Jan 11 12:41:23.246: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.334788ms)
Jan 11 12:41:23.251: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.143002ms)
Jan 11 12:41:23.255: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.091619ms)
Jan 11 12:41:23.259: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.151076ms)
Jan 11 12:41:23.295: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 35.427813ms)
Jan 11 12:41:23.303: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.976573ms)
Jan 11 12:41:23.316: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.328081ms)
Jan 11 12:41:23.322: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.716687ms)
Jan 11 12:41:23.327: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.858916ms)
Jan 11 12:41:23.332: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.583738ms)
Jan 11 12:41:23.338: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.238387ms)
Jan 11 12:41:23.342: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.314133ms)
Jan 11 12:41:23.347: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.212614ms)
Jan 11 12:41:23.353: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.429637ms)
Jan 11 12:41:23.358: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.955864ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:41:23.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-22jqj" for this suite.
Jan 11 12:41:29.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:41:29.519: INFO: namespace: e2e-tests-proxy-22jqj, resource: bindings, ignored listing per whitelist
Jan 11 12:41:29.533: INFO: namespace e2e-tests-proxy-22jqj deletion completed in 6.169843216s

• [SLOW TEST:6.592 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
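The twenty numbered attempts above all hit the kubelet's logs endpoint through the API server's node proxy subresource. A rough manual equivalent for a single attempt, reusing the node name reported in this run:

# single request to the kubelet logs listing via the API server proxy
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"
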
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:41:29.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 11 12:41:40.434: INFO: Successfully updated pod "annotationupdateb0a3a432-346f-11ea-b0bd-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:41:42.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bnt4k" for this suite.
Jan 11 12:42:06.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:42:06.774: INFO: namespace: e2e-tests-projected-bnt4k, resource: bindings, ignored listing per whitelist
Jan 11 12:42:06.796: INFO: namespace e2e-tests-projected-bnt4k deletion completed in 24.191618643s

• [SLOW TEST:37.263 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
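A minimal sketch of the pattern this test exercises: a projected downward API volume that surfaces the pod's annotations as a file, which the kubelet refreshes after the annotations change. The names annotation-demo and podinfo are illustrative, not the generated ones in the log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# Changing the annotation should eventually show up in /etc/podinfo/annotations:
kubectl annotate pod annotation-demo build="two" --overwrite
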
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:42:06.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-c6d62442-346f-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 12:42:07.071: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005" in namespace "e2e-tests-configmap-p9mlr" to be "success or failure"
Jan 11 12:42:07.083: INFO: Pod "pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.325897ms
Jan 11 12:42:09.103: INFO: Pod "pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032379107s
Jan 11 12:42:11.111: INFO: Pod "pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039495454s
Jan 11 12:42:13.123: INFO: Pod "pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052225138s
Jan 11 12:42:15.137: INFO: Pod "pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065715428s
Jan 11 12:42:17.246: INFO: Pod "pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.174438553s
STEP: Saw pod success
Jan 11 12:42:17.246: INFO: Pod "pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:42:17.256: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 11 12:42:17.327: INFO: Waiting for pod pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:42:17.409: INFO: Pod pod-configmaps-c6d77a69-346f-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:42:17.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p9mlr" for this suite.
Jan 11 12:42:23.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:42:23.624: INFO: namespace: e2e-tests-configmap-p9mlr, resource: bindings, ignored listing per whitelist
Jan 11 12:42:23.724: INFO: namespace e2e-tests-configmap-p9mlr deletion completed in 6.303134547s

• [SLOW TEST:16.928 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
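A minimal sketch of the ConfigMap-as-volume consumption this test verifies; the names demo-config and configmap-demo are illustrative:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.28
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF
# Once the pod has run to completion its log should contain "value-1":
kubectl logs configmap-demo
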
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:42:23.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-xjd6r in namespace e2e-tests-proxy-9sml5
I0111 12:42:24.266562       9 runners.go:184] Created replication controller with name: proxy-service-xjd6r, namespace: e2e-tests-proxy-9sml5, replica count: 1
I0111 12:42:25.317257       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 12:42:26.317531       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 12:42:27.317823       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 12:42:28.318348       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 12:42:29.318787       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 12:42:30.319303       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 12:42:31.319634       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 12:42:32.320091       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 12:42:33.320660       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0111 12:42:34.321157       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0111 12:42:35.321621       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0111 12:42:36.322066       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0111 12:42:37.322486       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0111 12:42:38.322915       9 runners.go:184] proxy-service-xjd6r Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 11 12:42:38.336: INFO: setup took 14.264216763s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 11 12:42:38.364: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9sml5/pods/proxy-service-xjd6r-kz8vx:160/proxy/: foo (200; 27.236679ms)
Jan 11 12:42:38.382: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-9sml5/pods/http:proxy-service-xjd6r-kz8vx:1080/proxy/: 
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 11 12:42:59.205: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-a,UID:e5ef628d-346f-11ea-a994-fa163e34d433,ResourceVersion:17927484,Generation:0,CreationTimestamp:2020-01-11 12:42:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 11 12:42:59.205: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-a,UID:e5ef628d-346f-11ea-a994-fa163e34d433,ResourceVersion:17927484,Generation:0,CreationTimestamp:2020-01-11 12:42:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 11 12:43:09.227: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-a,UID:e5ef628d-346f-11ea-a994-fa163e34d433,ResourceVersion:17927496,Generation:0,CreationTimestamp:2020-01-11 12:42:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 11 12:43:09.227: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-a,UID:e5ef628d-346f-11ea-a994-fa163e34d433,ResourceVersion:17927496,Generation:0,CreationTimestamp:2020-01-11 12:42:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 11 12:43:19.255: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-a,UID:e5ef628d-346f-11ea-a994-fa163e34d433,ResourceVersion:17927508,Generation:0,CreationTimestamp:2020-01-11 12:42:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 11 12:43:19.255: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-a,UID:e5ef628d-346f-11ea-a994-fa163e34d433,ResourceVersion:17927508,Generation:0,CreationTimestamp:2020-01-11 12:42:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 11 12:43:29.278: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-a,UID:e5ef628d-346f-11ea-a994-fa163e34d433,ResourceVersion:17927521,Generation:0,CreationTimestamp:2020-01-11 12:42:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 11 12:43:29.279: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-a,UID:e5ef628d-346f-11ea-a994-fa163e34d433,ResourceVersion:17927521,Generation:0,CreationTimestamp:2020-01-11 12:42:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 11 12:43:39.305: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-b,UID:fdd544c4-346f-11ea-a994-fa163e34d433,ResourceVersion:17927534,Generation:0,CreationTimestamp:2020-01-11 12:43:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 11 12:43:39.306: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-b,UID:fdd544c4-346f-11ea-a994-fa163e34d433,ResourceVersion:17927534,Generation:0,CreationTimestamp:2020-01-11 12:43:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 11 12:43:49.452: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-b,UID:fdd544c4-346f-11ea-a994-fa163e34d433,ResourceVersion:17927547,Generation:0,CreationTimestamp:2020-01-11 12:43:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 11 12:43:49.452: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-cgmqk,SelfLink:/api/v1/namespaces/e2e-tests-watch-cgmqk/configmaps/e2e-watch-test-configmap-b,UID:fdd544c4-346f-11ea-a994-fa163e34d433,ResourceVersion:17927547,Generation:0,CreationTimestamp:2020-01-11 12:43:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:43:59.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-cgmqk" for this suite.
Jan 11 12:44:05.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:44:05.626: INFO: namespace: e2e-tests-watch-cgmqk, resource: bindings, ignored listing per whitelist
Jan 11 12:44:05.694: INFO: namespace e2e-tests-watch-cgmqk deletion completed in 6.224205186s

• [SLOW TEST:66.892 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
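A rough manual equivalent of the label-scoped watch this test sets up; each ADDED, MODIFIED, and DELETED event arrives on the watch as the ConfigMap is created, changed, and removed. The ConfigMap name watch-demo is illustrative:

# in one terminal: watch ConfigMaps carrying label A
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch
# in a second terminal: create, modify, and delete a matching ConfigMap
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: watch-demo
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"
EOF
kubectl patch configmap watch-demo -p '{"data":{"mutation":"2"}}'
kubectl delete configmap watch-demo
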
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:44:05.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 11 12:44:06.546: INFO: created pod pod-service-account-defaultsa
Jan 11 12:44:06.546: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 11 12:44:06.637: INFO: created pod pod-service-account-mountsa
Jan 11 12:44:06.637: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 11 12:44:06.667: INFO: created pod pod-service-account-nomountsa
Jan 11 12:44:06.668: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 11 12:44:06.715: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 11 12:44:06.715: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 11 12:44:06.968: INFO: created pod pod-service-account-mountsa-mountspec
Jan 11 12:44:06.968: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 11 12:44:07.011: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 11 12:44:07.011: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 11 12:44:07.732: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 11 12:44:07.732: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 11 12:44:08.620: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 11 12:44:08.620: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 11 12:44:08.780: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 11 12:44:08.780: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:44:08.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-82blx" for this suite.
Jan 11 12:44:39.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:44:39.483: INFO: namespace: e2e-tests-svcaccounts-82blx, resource: bindings, ignored listing per whitelist
Jan 11 12:44:39.518: INFO: namespace e2e-tests-svcaccounts-82blx deletion completed in 30.632334943s

• [SLOW TEST:33.823 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
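A minimal sketch of the opt-out behaviour checked above, at the pod level; with automountServiceAccountToken: false no token volume is injected. The pod name no-token-demo is illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-token-demo
spec:
  automountServiceAccountToken: false
  containers:
  - name: main
    image: busybox:1.28
    command: ["sleep", "3600"]
EOF
# Expected to fail with "No such file or directory" because no token was mounted:
kubectl exec no-token-demo -- ls /var/run/secrets/kubernetes.io/serviceaccount
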
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:44:39.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 11 12:44:39.633: INFO: Waiting up to 5m0s for pod "pod-21cca19a-3470-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-mq2mx" to be "success or failure"
Jan 11 12:44:39.734: INFO: Pod "pod-21cca19a-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 100.485902ms
Jan 11 12:44:41.746: INFO: Pod "pod-21cca19a-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112452331s
Jan 11 12:44:43.759: INFO: Pod "pod-21cca19a-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125586237s
Jan 11 12:44:45.771: INFO: Pod "pod-21cca19a-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137897034s
Jan 11 12:44:47.785: INFO: Pod "pod-21cca19a-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152397905s
Jan 11 12:44:49.807: INFO: Pod "pod-21cca19a-3470-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.173645709s
STEP: Saw pod success
Jan 11 12:44:49.807: INFO: Pod "pod-21cca19a-3470-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:44:49.813: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-21cca19a-3470-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 12:44:50.753: INFO: Waiting for pod pod-21cca19a-3470-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:44:50.758: INFO: Pod pod-21cca19a-3470-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:44:50.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mq2mx" for this suite.
Jan 11 12:44:56.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:44:56.961: INFO: namespace: e2e-tests-emptydir-mq2mx, resource: bindings, ignored listing per whitelist
Jan 11 12:44:56.996: INFO: namespace e2e-tests-emptydir-mq2mx deletion completed in 6.230674526s

• [SLOW TEST:17.478 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
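A minimal sketch of the emptyDir permission variants exercised by this group of tests: a pod running as a non-root UID writes a file into the volume and checks its mode. Names and the UID are illustrative; the tmpfs variants use medium: Memory instead of the default medium:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: test-container
    image: busybox:1.28
    command: ["sh", "-c", "touch /ed/file && chmod 0666 /ed/file && ls -ln /ed/file"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}
EOF
kubectl logs emptydir-demo   # shows mode 0666 and the non-root owner UID
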
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:44:56.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 12:44:57.426: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 11 12:44:57.551: INFO: Number of nodes with available pods: 0
Jan 11 12:44:57.551: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 11 12:44:57.613: INFO: Number of nodes with available pods: 0
Jan 11 12:44:57.613: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:44:59.306: INFO: Number of nodes with available pods: 0
Jan 11 12:44:59.307: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:44:59.637: INFO: Number of nodes with available pods: 0
Jan 11 12:44:59.637: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:00.627: INFO: Number of nodes with available pods: 0
Jan 11 12:45:00.627: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:01.670: INFO: Number of nodes with available pods: 0
Jan 11 12:45:01.670: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:03.511: INFO: Number of nodes with available pods: 0
Jan 11 12:45:03.511: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:04.302: INFO: Number of nodes with available pods: 0
Jan 11 12:45:04.302: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:04.653: INFO: Number of nodes with available pods: 0
Jan 11 12:45:04.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:05.994: INFO: Number of nodes with available pods: 0
Jan 11 12:45:05.994: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:06.639: INFO: Number of nodes with available pods: 0
Jan 11 12:45:06.639: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:07.671: INFO: Number of nodes with available pods: 0
Jan 11 12:45:07.671: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:08.626: INFO: Number of nodes with available pods: 1
Jan 11 12:45:08.626: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 11 12:45:08.680: INFO: Number of nodes with available pods: 1
Jan 11 12:45:08.680: INFO: Number of running nodes: 0, number of available pods: 1
Jan 11 12:45:09.703: INFO: Number of nodes with available pods: 0
Jan 11 12:45:09.703: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 11 12:45:09.751: INFO: Number of nodes with available pods: 0
Jan 11 12:45:09.751: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:10.987: INFO: Number of nodes with available pods: 0
Jan 11 12:45:10.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:11.761: INFO: Number of nodes with available pods: 0
Jan 11 12:45:11.761: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:12.762: INFO: Number of nodes with available pods: 0
Jan 11 12:45:12.762: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:13.819: INFO: Number of nodes with available pods: 0
Jan 11 12:45:13.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:14.772: INFO: Number of nodes with available pods: 0
Jan 11 12:45:14.772: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:15.768: INFO: Number of nodes with available pods: 0
Jan 11 12:45:15.768: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:16.786: INFO: Number of nodes with available pods: 0
Jan 11 12:45:16.786: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:17.770: INFO: Number of nodes with available pods: 0
Jan 11 12:45:17.770: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:18.800: INFO: Number of nodes with available pods: 0
Jan 11 12:45:18.800: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:19.770: INFO: Number of nodes with available pods: 0
Jan 11 12:45:19.770: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:20.760: INFO: Number of nodes with available pods: 0
Jan 11 12:45:20.760: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:21.767: INFO: Number of nodes with available pods: 0
Jan 11 12:45:21.767: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:22.793: INFO: Number of nodes with available pods: 0
Jan 11 12:45:22.793: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:24.198: INFO: Number of nodes with available pods: 0
Jan 11 12:45:24.199: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:24.763: INFO: Number of nodes with available pods: 0
Jan 11 12:45:24.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:25.765: INFO: Number of nodes with available pods: 0
Jan 11 12:45:25.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:26.781: INFO: Number of nodes with available pods: 0
Jan 11 12:45:26.781: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:28.687: INFO: Number of nodes with available pods: 0
Jan 11 12:45:28.687: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:28.977: INFO: Number of nodes with available pods: 0
Jan 11 12:45:28.977: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:29.929: INFO: Number of nodes with available pods: 0
Jan 11 12:45:29.930: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:30.765: INFO: Number of nodes with available pods: 0
Jan 11 12:45:30.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:31.763: INFO: Number of nodes with available pods: 0
Jan 11 12:45:31.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:45:32.780: INFO: Number of nodes with available pods: 1
Jan 11 12:45:32.780: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-5jg87, will wait for the garbage collector to delete the pods
Jan 11 12:45:32.935: INFO: Deleting DaemonSet.extensions daemon-set took: 20.777158ms
Jan 11 12:45:33.035: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.248068ms
Jan 11 12:45:42.664: INFO: Number of nodes with available pods: 0
Jan 11 12:45:42.664: INFO: Number of running nodes: 0, number of available pods: 0
Jan 11 12:45:42.674: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-5jg87/daemonsets","resourceVersion":"17927854"},"items":null}

Jan 11 12:45:42.679: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5jg87/pods","resourceVersion":"17927854"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:45:42.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-5jg87" for this suite.
Jan 11 12:45:50.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:45:50.945: INFO: namespace: e2e-tests-daemonsets-5jg87, resource: bindings, ignored listing per whitelist
Jan 11 12:45:50.973: INFO: namespace e2e-tests-daemonsets-5jg87 deletion completed in 8.148372411s

• [SLOW TEST:53.976 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
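A minimal sketch of the node-selector DaemonSet flow above: no daemon pod runs until a node carries the selected label, and relabelling the node schedules or evicts the pod. The DaemonSet name, label key, and image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-demo
spec:
  selector:
    matchLabels:
      app: daemon-demo
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
kubectl label node hunter-server-hu5at5svl7ps color=blue
kubectl get pods -l app=daemon-demo -o wide
# Switching the label evicts the daemon pod again:
kubectl label node hunter-server-hu5at5svl7ps color=green --overwrite
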
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:45:50.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 11 12:45:51.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 11 12:45:53.009: INFO: stderr: ""
Jan 11 12:45:53.010: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:45:53.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6947j" for this suite.
Jan 11 12:45:59.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:45:59.207: INFO: namespace: e2e-tests-kubectl-6947j, resource: bindings, ignored listing per whitelist
Jan 11 12:45:59.208: INFO: namespace e2e-tests-kubectl-6947j deletion completed in 6.188900363s

• [SLOW TEST:8.235 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:45:59.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 11 12:45:59.387: INFO: Waiting up to 5m0s for pod "pod-5155f716-3470-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-nf5fh" to be "success or failure"
Jan 11 12:45:59.391: INFO: Pod "pod-5155f716-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.330808ms
Jan 11 12:46:01.421: INFO: Pod "pod-5155f716-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03421446s
Jan 11 12:46:03.436: INFO: Pod "pod-5155f716-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048585958s
Jan 11 12:46:05.961: INFO: Pod "pod-5155f716-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.573836444s
Jan 11 12:46:07.980: INFO: Pod "pod-5155f716-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593256959s
Jan 11 12:46:10.072: INFO: Pod "pod-5155f716-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.684983867s
Jan 11 12:46:12.085: INFO: Pod "pod-5155f716-3470-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.697712222s
STEP: Saw pod success
Jan 11 12:46:12.085: INFO: Pod "pod-5155f716-3470-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:46:12.090: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5155f716-3470-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 12:46:12.832: INFO: Waiting for pod pod-5155f716-3470-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:46:12.853: INFO: Pod pod-5155f716-3470-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:46:12.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nf5fh" for this suite.
Jan 11 12:46:18.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:46:19.087: INFO: namespace: e2e-tests-emptydir-nf5fh, resource: bindings, ignored listing per whitelist
Jan 11 12:46:19.162: INFO: namespace e2e-tests-emptydir-nf5fh deletion completed in 6.288051713s

• [SLOW TEST:19.954 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:46:19.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 12:46:19.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-dwxd8" to be "success or failure"
Jan 11 12:46:19.384: INFO: Pod "downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.712613ms
Jan 11 12:46:21.890: INFO: Pod "downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.533000692s
Jan 11 12:46:23.927: INFO: Pod "downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569555401s
Jan 11 12:46:26.538: INFO: Pod "downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.181186099s
Jan 11 12:46:28.577: INFO: Pod "downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.219691882s
Jan 11 12:46:30.598: INFO: Pod "downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.240875325s
STEP: Saw pod success
Jan 11 12:46:30.598: INFO: Pod "downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:46:30.603: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 12:46:31.446: INFO: Waiting for pod downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:46:31.710: INFO: Pod downwardapi-volume-5d3c2bf4-3470-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:46:31.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dwxd8" for this suite.
Jan 11 12:46:37.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:46:37.817: INFO: namespace: e2e-tests-downward-api-dwxd8, resource: bindings, ignored listing per whitelist
Jan 11 12:46:37.976: INFO: namespace e2e-tests-downward-api-dwxd8 deletion completed in 6.254119007s

• [SLOW TEST:18.813 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
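A minimal sketch of the downward API volume this test reads: the container's CPU limit is exposed as a file via resourceFieldRef. Names and the limit values are illustrative; with the default divisor the value is rounded up to whole cores, so a 500m limit prints as 1 (set divisor: 1m to get millicores):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs cpu-limit-demo
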
S
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:46:37.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-6894de45-3470-11ea-b0bd-0242ac110005
STEP: Creating secret with name s-test-opt-upd-6894df74-3470-11ea-b0bd-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6894de45-3470-11ea-b0bd-0242ac110005
STEP: Updating secret s-test-opt-upd-6894df74-3470-11ea-b0bd-0242ac110005
STEP: Creating secret with name s-test-opt-create-6894e024-3470-11ea-b0bd-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:47:56.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mgcnm" for this suite.
Jan 11 12:48:23.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:48:23.078: INFO: namespace: e2e-tests-projected-mgcnm, resource: bindings, ignored listing per whitelist
Jan 11 12:48:23.169: INFO: namespace e2e-tests-projected-mgcnm deletion completed in 26.249522338s

• [SLOW TEST:105.193 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
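A minimal sketch of the optional projected secret behaviour verified above: the pod keeps running even while a referenced secret is absent, and the volume content follows secret deletions, updates, and creations. The names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "while true; do ls /etc/secrets; sleep 5; done"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: opt-create-demo
          optional: true
EOF
# Creating the secret later should make its keys appear under /etc/secrets
# after the kubelet resyncs the projected volume:
kubectl create secret generic opt-create-demo --from-literal=data-1=value-1
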
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:48:23.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-a73517a7-3470-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 12:48:23.551: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-ksd9m" to be "success or failure"
Jan 11 12:48:23.619: INFO: Pod "pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 68.5035ms
Jan 11 12:48:25.742: INFO: Pod "pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190975688s
Jan 11 12:48:27.754: INFO: Pod "pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203330076s
Jan 11 12:48:30.189: INFO: Pod "pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.637611921s
Jan 11 12:48:32.208: INFO: Pod "pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.657584157s
Jan 11 12:48:34.219: INFO: Pod "pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.668324238s
STEP: Saw pod success
Jan 11 12:48:34.219: INFO: Pod "pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:48:34.222: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 11 12:48:34.508: INFO: Waiting for pod pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:48:34.750: INFO: Pod pod-projected-secrets-a7407ca1-3470-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:48:34.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ksd9m" for this suite.
Jan 11 12:48:42.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:48:42.938: INFO: namespace: e2e-tests-projected-ksd9m, resource: bindings, ignored listing per whitelist
Jan 11 12:48:42.951: INFO: namespace e2e-tests-projected-ksd9m deletion completed in 8.190446261s

• [SLOW TEST:19.782 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
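A minimal sketch of the key-to-path mapping variant: the secret key data-1 is surfaced under a different file name inside the projected volume. The names are illustrative:

kubectl create secret generic mapped-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mapped-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.28
    command: ["cat", "/etc/projected/new-path-data-1"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: mapped-secret
          items:
          - key: data-1
            path: new-path-data-1
EOF
kubectl logs mapped-secret-demo   # expected to print "value-1"
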
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:48:42.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 11 12:48:43.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:43.507: INFO: stderr: ""
Jan 11 12:48:43.507: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 11 12:48:43.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:43.679: INFO: stderr: ""
Jan 11 12:48:43.679: INFO: stdout: "update-demo-nautilus-pgwkm update-demo-nautilus-vz4c2 "
Jan 11 12:48:43.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgwkm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:43.857: INFO: stderr: ""
Jan 11 12:48:43.857: INFO: stdout: ""
Jan 11 12:48:43.857: INFO: update-demo-nautilus-pgwkm is created but not running
Jan 11 12:48:48.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:48.999: INFO: stderr: ""
Jan 11 12:48:48.999: INFO: stdout: "update-demo-nautilus-pgwkm update-demo-nautilus-vz4c2 "
Jan 11 12:48:48.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgwkm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:49.112: INFO: stderr: ""
Jan 11 12:48:49.112: INFO: stdout: ""
Jan 11 12:48:49.112: INFO: update-demo-nautilus-pgwkm is created but not running
Jan 11 12:48:54.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:54.258: INFO: stderr: ""
Jan 11 12:48:54.258: INFO: stdout: "update-demo-nautilus-pgwkm update-demo-nautilus-vz4c2 "
Jan 11 12:48:54.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgwkm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:54.349: INFO: stderr: ""
Jan 11 12:48:54.349: INFO: stdout: ""
Jan 11 12:48:54.349: INFO: update-demo-nautilus-pgwkm is created but not running
Jan 11 12:48:59.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:59.549: INFO: stderr: ""
Jan 11 12:48:59.549: INFO: stdout: "update-demo-nautilus-pgwkm update-demo-nautilus-vz4c2 "
Jan 11 12:48:59.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgwkm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:59.693: INFO: stderr: ""
Jan 11 12:48:59.693: INFO: stdout: "true"
Jan 11 12:48:59.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pgwkm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:59.806: INFO: stderr: ""
Jan 11 12:48:59.806: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 11 12:48:59.806: INFO: validating pod update-demo-nautilus-pgwkm
Jan 11 12:48:59.836: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 11 12:48:59.836: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 11 12:48:59.836: INFO: update-demo-nautilus-pgwkm is verified up and running
Jan 11 12:48:59.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vz4c2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:48:59.948: INFO: stderr: ""
Jan 11 12:48:59.948: INFO: stdout: "true"
Jan 11 12:48:59.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vz4c2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:49:00.044: INFO: stderr: ""
Jan 11 12:49:00.044: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 11 12:49:00.044: INFO: validating pod update-demo-nautilus-vz4c2
Jan 11 12:49:00.053: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 11 12:49:00.053: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 11 12:49:00.053: INFO: update-demo-nautilus-vz4c2 is verified up and running
STEP: rolling-update to new replication controller
Jan 11 12:49:00.056: INFO: scanned /root for discovery docs: 
Jan 11 12:49:00.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:49:35.742: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 11 12:49:35.742: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 11 12:49:35.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:49:35.940: INFO: stderr: ""
Jan 11 12:49:35.941: INFO: stdout: "update-demo-kitten-f7zz4 update-demo-kitten-rl2sf "
Jan 11 12:49:35.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f7zz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:49:36.100: INFO: stderr: ""
Jan 11 12:49:36.101: INFO: stdout: "true"
Jan 11 12:49:36.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f7zz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:49:36.209: INFO: stderr: ""
Jan 11 12:49:36.209: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 11 12:49:36.209: INFO: validating pod update-demo-kitten-f7zz4
Jan 11 12:49:36.227: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 11 12:49:36.227: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 11 12:49:36.227: INFO: update-demo-kitten-f7zz4 is verified up and running
Jan 11 12:49:36.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rl2sf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:49:36.314: INFO: stderr: ""
Jan 11 12:49:36.314: INFO: stdout: "true"
Jan 11 12:49:36.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rl2sf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-55j8m'
Jan 11 12:49:36.424: INFO: stderr: ""
Jan 11 12:49:36.424: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 11 12:49:36.424: INFO: validating pod update-demo-kitten-rl2sf
Jan 11 12:49:36.432: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 11 12:49:36.432: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 11 12:49:36.432: INFO: update-demo-kitten-rl2sf is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:49:36.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-55j8m" for this suite.
Jan 11 12:50:02.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:50:02.578: INFO: namespace: e2e-tests-kubectl-55j8m, resource: bindings, ignored listing per whitelist
Jan 11 12:50:02.855: INFO: namespace e2e-tests-kubectl-55j8m deletion completed in 26.418955606s

• [SLOW TEST:79.904 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
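
The Update Demo validation above repeats the same two kubectl queries about every five seconds: list the pods matching name=update-demo, then ask each pod whether its update-demo container is running via a Go template. A rough sketch of that retry loop using os/exec; the template string is quoted from the log, while the helper name, loop bound, and hard-coded kubeconfig/namespace/pod values are only illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // containerRunning reproduces the template query from the log: it prints "true"
    // only when the named pod's update-demo container reports a running state.
    func containerRunning(kubeconfig, namespace, pod string) (bool, error) {
        tmpl := `--template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
            "get", "pods", pod, "-o", "template", tmpl, "--namespace="+namespace).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "true", nil
    }

    func main() {
        for i := 0; i < 10; i++ { // the log retries at roughly 5s intervals
            ok, err := containerRunning("/root/.kube/config", "e2e-tests-kubectl-55j8m", "update-demo-nautilus-pgwkm")
            if err == nil && ok {
                fmt.Println("container is running")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("gave up waiting for the container to run")
    }
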
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:50:02.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-e28e5855-3470-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 12:50:03.030: INFO: Waiting up to 5m0s for pod "pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005" in namespace "e2e-tests-configmap-9n8qs" to be "success or failure"
Jan 11 12:50:03.102: INFO: Pod "pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 72.069617ms
Jan 11 12:50:05.137: INFO: Pod "pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107670342s
Jan 11 12:50:07.154: INFO: Pod "pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124107716s
Jan 11 12:50:09.483: INFO: Pod "pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.453601366s
Jan 11 12:50:11.498: INFO: Pod "pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.468249387s
Jan 11 12:50:13.509: INFO: Pod "pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.479400275s
STEP: Saw pod success
Jan 11 12:50:13.509: INFO: Pod "pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:50:13.513: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 11 12:50:14.609: INFO: Waiting for pod pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:50:14.620: INFO: Pod pod-configmaps-e28ef5cc-3470-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:50:14.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9n8qs" for this suite.
Jan 11 12:50:20.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:50:20.917: INFO: namespace: e2e-tests-configmap-9n8qs, resource: bindings, ignored listing per whitelist
Jan 11 12:50:20.959: INFO: namespace e2e-tests-configmap-9n8qs deletion completed in 6.319841735s

• [SLOW TEST:18.102 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
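
The "defaultMode set" variant above differs from a plain ConfigMap volume in a single field. A sketch of the relevant volume definition using the k8s.io/api/core/v1 types as I recall them for this API version; the configMap name is shortened from the log and the mode value is just an example, not necessarily what the test uses:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // example explicit mode; the test verifies the projected files carry it
        vol := corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
                    DefaultMode:          &mode, // applied to every projected file unless an item overrides it
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
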
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:50:20.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-ed54a91d-3470-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 12:50:21.246: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-jlxhk" to be "success or failure"
Jan 11 12:50:21.293: INFO: Pod "pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.03327ms
Jan 11 12:50:23.756: INFO: Pod "pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.510453364s
Jan 11 12:50:25.766: INFO: Pod "pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.520044251s
Jan 11 12:50:27.840: INFO: Pod "pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.59363332s
Jan 11 12:50:30.046: INFO: Pod "pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.800487811s
Jan 11 12:50:32.068: INFO: Pod "pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.822107576s
STEP: Saw pod success
Jan 11 12:50:32.068: INFO: Pod "pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:50:32.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 11 12:50:34.223: INFO: Waiting for pod pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:50:34.539: INFO: Pod pod-projected-configmaps-ed68d945-3470-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:50:34.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jlxhk" for this suite.
Jan 11 12:50:42.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:50:42.984: INFO: namespace: e2e-tests-projected-jlxhk, resource: bindings, ignored listing per whitelist
Jan 11 12:50:43.061: INFO: namespace e2e-tests-projected-jlxhk deletion completed in 8.443723257s

• [SLOW TEST:22.096 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
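
"With mappings" means individual keys are remapped to explicit file paths instead of using the key name as the file name. A fragment showing that Items mapping on a projected configMap source, again with core/v1 types from memory and illustrative key/path names:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
                            Items: []corev1.KeyToPath{
                                // key "data-1" is exposed as the file path/to/data-2 under the mount
                                {Key: "data-1", Path: "path/to/data-2"},
                            },
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
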
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:50:43.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 11 12:50:43.178: INFO: PodSpec: initContainers in spec.initContainers
Jan 11 12:51:53.677: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fa7e4d41-3470-11ea-b0bd-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-7mh5p", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-7mh5p/pods/pod-init-fa7e4d41-3470-11ea-b0bd-0242ac110005", UID:"fa82c3bf-3470-11ea-a994-fa163e34d433", ResourceVersion:"17928641", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714343843, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"178439908"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4rnsp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00277c1c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4rnsp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4rnsp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4rnsp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00283e7a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002834180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00283e8d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00283e900)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00283e908), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00283e90c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343843, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343843, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343843, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714343843, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0023e60a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002642070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026420e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://1ec00498336ee4518679809e92984df55c32b50c8de1ee29412357baceef33e2"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0023e60e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0023e60c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:51:53.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7mh5p" for this suite.
Jan 11 12:52:17.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:52:18.089: INFO: namespace: e2e-tests-init-container-7mh5p, resource: bindings, ignored listing per whitelist
Jan 11 12:52:18.153: INFO: namespace e2e-tests-init-container-7mh5p deletion completed in 24.424495647s

• [SLOW TEST:95.091 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
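
The pod dump above shows why the run1 container never starts: init1 runs /bin/false, and with restartPolicy Always the kubelet keeps restarting init1 (its RestartCount climbs) and never reaches init2 or the app container. The shape of that pod spec, reduced to the fields that matter and written with the core/v1 types from memory:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        spec := corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways, // init1 is retried forever instead of failing the pod
            InitContainers: []corev1.Container{
                {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
                {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}}, // never reached
            },
            Containers: []corev1.Container{
                {Name: "run1", Image: "k8s.gcr.io/pause:3.1"}, // stays in a Waiting state, as in the dump
            },
        }
        fmt.Printf("%+v\n", spec)
    }
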
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:52:18.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-333e1478-3471-11ea-b0bd-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-333e14eb-3471-11ea-b0bd-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-333e1478-3471-11ea-b0bd-0242ac110005
STEP: Updating configmap cm-test-opt-upd-333e14eb-3471-11ea-b0bd-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-333e152d-3471-11ea-b0bd-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:52:36.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-chcmd" for this suite.
Jan 11 12:53:00.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:53:01.259: INFO: namespace: e2e-tests-configmap-chcmd, resource: bindings, ignored listing per whitelist
Jan 11 12:53:01.306: INFO: namespace e2e-tests-configmap-chcmd deletion completed in 24.367618183s

• [SLOW TEST:43.153 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
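
This test mounts configMaps marked optional, then deletes one, updates another, and creates a third, waiting for the kubelet to resync the volume contents. The spec detail that distinguishes it is the Optional flag on the volume source; a minimal fragment with core/v1 types from memory and a name shortened from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        optional := true
        vol := corev1.Volume{
            Name: "cm-test-opt-del",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del-333e1478"},
                    Optional:             &optional, // the mount is allowed even if the referenced configMap is absent
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
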
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:53:01.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 11 12:53:01.547: INFO: Waiting up to 5m0s for pod "downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-5mrsz" to be "success or failure"
Jan 11 12:53:01.636: INFO: Pod "downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 88.524352ms
Jan 11 12:53:03.668: INFO: Pod "downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121274702s
Jan 11 12:53:05.680: INFO: Pod "downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133031038s
Jan 11 12:53:08.025: INFO: Pod "downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.478031648s
Jan 11 12:53:10.039: INFO: Pod "downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492256702s
Jan 11 12:53:12.052: INFO: Pod "downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.505280687s
STEP: Saw pod success
Jan 11 12:53:12.052: INFO: Pod "downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:53:12.057: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 11 12:53:12.727: INFO: Waiting for pod downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:53:12.739: INFO: Pod downward-api-4cf32d89-3471-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:53:12.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5mrsz" for this suite.
Jan 11 12:53:18.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:53:18.957: INFO: namespace: e2e-tests-downward-api-5mrsz, resource: bindings, ignored listing per whitelist
Jan 11 12:53:19.108: INFO: namespace e2e-tests-downward-api-5mrsz deletion completed in 6.360905004s

• [SLOW TEST:17.801 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
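
The downward API test injects the node's IP into the container environment through a fieldRef rather than a volume. A sketch of the env var definition with core/v1 types from memory; the variable name is illustrative, but status.hostIP is the standard downward API field path:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        env := corev1.EnvVar{
            Name: "HOST_IP",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{
                    FieldPath: "status.hostIP", // resolved by the kubelet when the container starts
                },
            },
        }
        fmt.Printf("%+v\n", env)
    }
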
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:53:19.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-6dkq7/configmap-test-57a85369-3471-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 11 12:53:19.583: INFO: Waiting up to 5m0s for pod "pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005" in namespace "e2e-tests-configmap-6dkq7" to be "success or failure"
Jan 11 12:53:19.772: INFO: Pod "pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 188.717791ms
Jan 11 12:53:21.787: INFO: Pod "pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203865974s
Jan 11 12:53:23.815: INFO: Pod "pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232056148s
Jan 11 12:53:25.847: INFO: Pod "pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26381836s
Jan 11 12:53:28.417: INFO: Pod "pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.834127598s
Jan 11 12:53:30.611: INFO: Pod "pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.02809567s
STEP: Saw pod success
Jan 11 12:53:30.612: INFO: Pod "pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:53:30.645: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005 container env-test: 
STEP: delete the pod
Jan 11 12:53:30.763: INFO: Waiting for pod pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:53:30.779: INFO: Pod pod-configmaps-57ae911d-3471-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:53:30.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6dkq7" for this suite.
Jan 11 12:53:38.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:53:38.925: INFO: namespace: e2e-tests-configmap-6dkq7, resource: bindings, ignored listing per whitelist
Jan 11 12:53:39.034: INFO: namespace e2e-tests-configmap-6dkq7 deletion completed in 8.244544449s

• [SLOW TEST:19.925 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
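
Here the configMap is consumed through the environment instead of a volume: each env var pulls a single key with a configMapKeyRef. A fragment with core/v1 types from memory; the env var and key names are illustrative and the configMap name is shortened from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        env := corev1.EnvVar{
            Name: "CONFIG_DATA_1",
            ValueFrom: &corev1.EnvVarSource{
                ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-57a85369"},
                    Key:                  "data-1", // the env-test container just echoes this value back
                },
            },
        }
        fmt.Printf("%+v\n", env)
    }
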
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:53:39.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 11 12:53:39.319: INFO: Waiting up to 5m0s for pod "pod-6377a843-3471-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-h22dx" to be "success or failure"
Jan 11 12:53:39.434: INFO: Pod "pod-6377a843-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 114.728617ms
Jan 11 12:53:41.456: INFO: Pod "pod-6377a843-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136869045s
Jan 11 12:53:43.488: INFO: Pod "pod-6377a843-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168482489s
Jan 11 12:53:45.504: INFO: Pod "pod-6377a843-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184349743s
Jan 11 12:53:47.530: INFO: Pod "pod-6377a843-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210549053s
Jan 11 12:53:49.540: INFO: Pod "pod-6377a843-3471-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.221138815s
STEP: Saw pod success
Jan 11 12:53:49.540: INFO: Pod "pod-6377a843-3471-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:53:49.549: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6377a843-3471-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 12:53:50.374: INFO: Waiting for pod pod-6377a843-3471-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:53:50.418: INFO: Pod pod-6377a843-3471-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:53:50.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h22dx" for this suite.
Jan 11 12:53:56.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:53:56.907: INFO: namespace: e2e-tests-emptydir-h22dx, resource: bindings, ignored listing per whitelist
Jan 11 12:53:56.952: INFO: namespace e2e-tests-emptydir-h22dx deletion completed in 6.224302035s

• [SLOW TEST:17.918 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
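
The (root,0644,default) case writes a file into an emptyDir on the node's default medium and asserts its permission bits. The in-container check amounts to a stat plus a mode comparison; a standalone Go equivalent, with the path and file content made up rather than taken from the test's actual mounttest arguments:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Stand-in for the emptyDir mount; the real test writes into the volume path inside the pod.
        dir, err := os.MkdirTemp("", "emptydir-demo")
        if err != nil {
            fmt.Println("mkdir failed:", err)
            return
        }
        defer os.RemoveAll(dir)

        path := filepath.Join(dir, "test-file")
        if err := os.WriteFile(path, []byte("mount tester content\n"), 0644); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        info, err := os.Stat(path)
        if err != nil {
            fmt.Println("stat failed:", err)
            return
        }
        // The e2e check boils down to this assertion on the permission bits.
        fmt.Printf("perms=%04o ok=%v\n", info.Mode().Perm(), info.Mode().Perm() == 0644)
    }
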
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:53:56.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 11 12:53:57.130: INFO: Waiting up to 5m0s for pod "pod-6e15d4d9-3471-11ea-b0bd-0242ac110005" in namespace "e2e-tests-emptydir-pfpxh" to be "success or failure"
Jan 11 12:53:57.140: INFO: Pod "pod-6e15d4d9-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.034242ms
Jan 11 12:54:00.158: INFO: Pod "pod-6e15d4d9-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.028378656s
Jan 11 12:54:02.178: INFO: Pod "pod-6e15d4d9-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.047479335s
Jan 11 12:54:04.540: INFO: Pod "pod-6e15d4d9-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.409866496s
Jan 11 12:54:06.856: INFO: Pod "pod-6e15d4d9-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.725998638s
Jan 11 12:54:08.874: INFO: Pod "pod-6e15d4d9-3471-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.744139624s
STEP: Saw pod success
Jan 11 12:54:08.874: INFO: Pod "pod-6e15d4d9-3471-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:54:08.879: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6e15d4d9-3471-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 12:54:09.050: INFO: Waiting for pod pod-6e15d4d9-3471-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:54:09.071: INFO: Pod pod-6e15d4d9-3471-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:54:09.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pfpxh" for this suite.
Jan 11 12:54:15.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:54:15.287: INFO: namespace: e2e-tests-emptydir-pfpxh, resource: bindings, ignored listing per whitelist
Jan 11 12:54:15.437: INFO: namespace e2e-tests-emptydir-pfpxh deletion completed in 6.349388848s

• [SLOW TEST:18.484 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
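
The tmpfs variant differs only in the medium: the emptyDir is backed by memory rather than node disk, and the test then inspects the mount's mode. A fragment with core/v1 types from memory:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{
                    Medium: corev1.StorageMediumMemory, // mounted as tmpfs inside the pod
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
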
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:54:15.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-79141503-3471-11ea-b0bd-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 11 12:54:15.660: INFO: Waiting up to 5m0s for pod "pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005" in namespace "e2e-tests-secrets-6kwlp" to be "success or failure"
Jan 11 12:54:15.673: INFO: Pod "pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.23228ms
Jan 11 12:54:17.922: INFO: Pod "pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261938658s
Jan 11 12:54:19.937: INFO: Pod "pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276410244s
Jan 11 12:54:22.580: INFO: Pod "pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.919659701s
Jan 11 12:54:24.591: INFO: Pod "pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.930494845s
Jan 11 12:54:26.632: INFO: Pod "pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.971746722s
STEP: Saw pod success
Jan 11 12:54:26.632: INFO: Pod "pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:54:26.681: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 11 12:54:26.838: INFO: Waiting for pod pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:54:27.030: INFO: Pod pod-secrets-7914c04b-3471-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:54:27.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6kwlp" for this suite.
Jan 11 12:54:33.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:54:33.144: INFO: namespace: e2e-tests-secrets-6kwlp, resource: bindings, ignored listing per whitelist
Jan 11 12:54:33.355: INFO: namespace e2e-tests-secrets-6kwlp deletion completed in 6.313595797s

• [SLOW TEST:17.918 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
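
This is the secret counterpart of the configMap volume tests above: the pod mounts a named secret and the secret-volume-test container reads the projected file back. A fragment with core/v1 types from memory; the secret name is shortened from the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: "secret-test-79141503", // each key becomes a file under the mount path
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
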
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:54:33.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 11 12:54:33.544: INFO: Waiting up to 5m0s for pod "downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005" in namespace "e2e-tests-downward-api-8xnzm" to be "success or failure"
Jan 11 12:54:33.706: INFO: Pod "downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 161.220714ms
Jan 11 12:54:35.768: INFO: Pod "downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22396542s
Jan 11 12:54:37.787: INFO: Pod "downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242089735s
Jan 11 12:54:39.976: INFO: Pod "downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431828922s
Jan 11 12:54:41.988: INFO: Pod "downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.443708419s
Jan 11 12:54:44.007: INFO: Pod "downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.462409979s
STEP: Saw pod success
Jan 11 12:54:44.007: INFO: Pod "downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:54:44.011: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 11 12:54:45.220: INFO: Waiting for pod downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:54:45.280: INFO: Pod downward-api-83ca6f90-3471-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:54:45.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8xnzm" for this suite.
Jan 11 12:54:51.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:54:51.369: INFO: namespace: e2e-tests-downward-api-8xnzm, resource: bindings, ignored listing per whitelist
Jan 11 12:54:51.576: INFO: namespace e2e-tests-downward-api-8xnzm deletion completed in 6.289060218s

• [SLOW TEST:18.221 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
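
When the container declares no resource limits, a downward API resourceFieldRef for limits.cpu or limits.memory falls back to the node's allocatable capacity, which is what this test asserts. A sketch of the env vars involved, using the core/v1 and apimachinery resource types from memory; the variable names and divisors are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        envs := []corev1.EnvVar{
            {
                Name: "CPU_LIMIT",
                ValueFrom: &corev1.EnvVarSource{
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        Resource: "limits.cpu", // no limit on the container, so node allocatable CPU is reported
                        Divisor:  resource.MustParse("1"),
                    },
                },
            },
            {
                Name: "MEMORY_LIMIT",
                ValueFrom: &corev1.EnvVarSource{
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        Resource: "limits.memory",
                        Divisor:  resource.MustParse("1Mi"),
                    },
                },
            },
        }
        fmt.Printf("%+v\n", envs)
    }
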
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:54:51.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-xp8g7
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 11 12:54:52.000: INFO: Found 0 stateful pods, waiting for 3
Jan 11 12:55:02.025: INFO: Found 2 stateful pods, waiting for 3
Jan 11 12:55:12.072: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:55:12.072: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:55:12.072: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 11 12:55:22.032: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:55:22.032: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:55:22.032: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 11 12:55:22.089: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 11 12:55:32.329: INFO: Updating stateful set ss2
Jan 11 12:55:32.382: INFO: Waiting for Pod e2e-tests-statefulset-xp8g7/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 11 12:55:42.859: INFO: Found 2 stateful pods, waiting for 3
Jan 11 12:55:52.882: INFO: Found 2 stateful pods, waiting for 3
Jan 11 12:56:03.358: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:56:03.358: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:56:03.358: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 11 12:56:12.875: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:56:12.875: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 12:56:12.875: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 11 12:56:12.907: INFO: Updating stateful set ss2
Jan 11 12:56:13.025: INFO: Waiting for Pod e2e-tests-statefulset-xp8g7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 11 12:56:23.188: INFO: Updating stateful set ss2
Jan 11 12:56:23.255: INFO: Waiting for StatefulSet e2e-tests-statefulset-xp8g7/ss2 to complete update
Jan 11 12:56:23.255: INFO: Waiting for Pod e2e-tests-statefulset-xp8g7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 11 12:56:33.377: INFO: Waiting for StatefulSet e2e-tests-statefulset-xp8g7/ss2 to complete update
Jan 11 12:56:33.377: INFO: Waiting for Pod e2e-tests-statefulset-xp8g7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 11 12:56:43.343: INFO: Waiting for StatefulSet e2e-tests-statefulset-xp8g7/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 11 12:56:53.277: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xp8g7
Jan 11 12:56:53.281: INFO: Scaling statefulset ss2 to 0
Jan 11 12:57:23.325: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 12:57:23.332: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:57:23.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-xp8g7" for this suite.
Jan 11 12:57:31.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:57:31.612: INFO: namespace: e2e-tests-statefulset-xp8g7, resource: bindings, ignored listing per whitelist
Jan 11 12:57:31.689: INFO: namespace e2e-tests-statefulset-xp8g7 deletion completed in 8.31531695s

• [SLOW TEST:160.113 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
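
The canary and phased behaviour in this spec comes from the StatefulSet RollingUpdate partition: only pods with an ordinal greater than or equal to the partition receive the updated template, so a partition of 2 updates just ss2-2, and lowering it step by step phases the rollout down to ss2-0. A minimal Go sketch of that strategy using the k8s.io/api types (the image is the one from the log above; everything else is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

func main() {
	partition := int32(2) // only pods with ordinal >= 2 (e.g. ss2-2) receive the new template
	ss := appsv1.StatefulSet{
		Spec: appsv1.StatefulSetSpec{
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition,
				},
			},
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "nginx", Image: "docker.io/library/nginx:1.15-alpine"},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss.Spec.UpdateStrategy, "", "  ")
	// Lowering Partition (2 -> 1 -> 0) phases the rollout across ss2-2, ss2-1, ss2-0.
	fmt.Println(string(out))
}
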
SSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:57:31.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-jnbwt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-jnbwt to expose endpoints map[]
Jan 11 12:57:32.006: INFO: Get endpoints failed (17.259704ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 11 12:57:33.021: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-jnbwt exposes endpoints map[] (1.031637675s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-jnbwt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-jnbwt to expose endpoints map[pod1:[100]]
Jan 11 12:57:37.250: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.209603057s elapsed, will retry)
Jan 11 12:57:43.175: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-jnbwt exposes endpoints map[pod1:[100]] (10.135094362s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-jnbwt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-jnbwt to expose endpoints map[pod1:[100] pod2:[101]]
Jan 11 12:57:47.547: INFO: Unexpected endpoints: found map[eec8605d-3471-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.247174136s elapsed, will retry)
Jan 11 12:57:53.420: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-jnbwt exposes endpoints map[pod2:[101] pod1:[100]] (10.120013994s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-jnbwt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-jnbwt to expose endpoints map[pod2:[101]]
Jan 11 12:57:54.652: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-jnbwt exposes endpoints map[pod2:[101]] (1.216215429s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-jnbwt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-jnbwt to expose endpoints map[]
Jan 11 12:57:55.978: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-jnbwt exposes endpoints map[] (1.271844052s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:57:57.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-jnbwt" for this suite.
Jan 11 12:58:21.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:58:21.634: INFO: namespace: e2e-tests-services-jnbwt, resource: bindings, ignored listing per whitelist
Jan 11 12:58:21.644: INFO: namespace e2e-tests-services-jnbwt deletion completed in 24.549770249s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:49.955 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
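
Each named service port is reflected in the Endpoints object per ready backing pod, which is why the expected map grows from map[] to map[pod1:[100]] to map[pod1:[100] pod2:[101]] as pods are created. A sketch of a two-port Service of the kind this spec creates, with illustrative port names and selector:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"}, // illustrative selector
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	out, _ := json.MarshalIndent(svc.Spec.Ports, "", "  ")
	fmt.Println(string(out))
	// A pod exposing only containerPort 100 appears in the endpoints map as pod1:[100];
	// a second pod exposing 101 adds pod2:[101], matching the validations in the log.
}
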
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:58:21.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 11 12:58:21.945: INFO: Waiting up to 5m0s for pod "client-containers-0be9360a-3472-11ea-b0bd-0242ac110005" in namespace "e2e-tests-containers-7lbt2" to be "success or failure"
Jan 11 12:58:22.070: INFO: Pod "client-containers-0be9360a-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 124.614407ms
Jan 11 12:58:24.111: INFO: Pod "client-containers-0be9360a-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165965947s
Jan 11 12:58:26.124: INFO: Pod "client-containers-0be9360a-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179069681s
Jan 11 12:58:28.565: INFO: Pod "client-containers-0be9360a-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.619731587s
Jan 11 12:58:30.589: INFO: Pod "client-containers-0be9360a-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.643480078s
Jan 11 12:58:32.602: INFO: Pod "client-containers-0be9360a-3472-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.65639603s
STEP: Saw pod success
Jan 11 12:58:32.602: INFO: Pod "client-containers-0be9360a-3472-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 12:58:32.606: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-0be9360a-3472-11ea-b0bd-0242ac110005 container test-container: 
STEP: delete the pod
Jan 11 12:58:33.705: INFO: Waiting for pod client-containers-0be9360a-3472-11ea-b0bd-0242ac110005 to disappear
Jan 11 12:58:33.725: INFO: Pod client-containers-0be9360a-3472-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:58:33.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-7lbt2" for this suite.
Jan 11 12:58:39.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:58:39.898: INFO: namespace: e2e-tests-containers-7lbt2, resource: bindings, ignored listing per whitelist
Jan 11 12:58:39.974: INFO: namespace e2e-tests-containers-7lbt2 deletion completed in 6.235383417s

• [SLOW TEST:18.330 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
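
"Override all" means the pod replaces both halves of the image's default invocation: the container's Command overrides the image ENTRYPOINT and Args overrides its CMD. A sketch, with an illustrative image and command strings:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "test-container",
		Image:   "busybox",                               // illustrative image
		Command: []string{"/bin/sh", "-c"},               // replaces the image's ENTRYPOINT
		Args:    []string{"echo override all; sleep 1"},  // replaces the image's CMD
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
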
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:58:39.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 12:58:40.385: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 11 12:58:40.413: INFO: Number of nodes with available pods: 0
Jan 11 12:58:40.413: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:41.433: INFO: Number of nodes with available pods: 0
Jan 11 12:58:41.433: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:42.619: INFO: Number of nodes with available pods: 0
Jan 11 12:58:42.620: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:43.445: INFO: Number of nodes with available pods: 0
Jan 11 12:58:43.445: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:44.428: INFO: Number of nodes with available pods: 0
Jan 11 12:58:44.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:45.436: INFO: Number of nodes with available pods: 0
Jan 11 12:58:45.436: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:46.437: INFO: Number of nodes with available pods: 0
Jan 11 12:58:46.437: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:47.434: INFO: Number of nodes with available pods: 0
Jan 11 12:58:47.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:48.435: INFO: Number of nodes with available pods: 0
Jan 11 12:58:48.435: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:49.435: INFO: Number of nodes with available pods: 1
Jan 11 12:58:49.435: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 11 12:58:49.522: INFO: Wrong image for pod: daemon-set-bx8lz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 11 12:58:50.603: INFO: Wrong image for pod: daemon-set-bx8lz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 11 12:58:51.584: INFO: Wrong image for pod: daemon-set-bx8lz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 11 12:58:52.625: INFO: Wrong image for pod: daemon-set-bx8lz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 11 12:58:53.569: INFO: Wrong image for pod: daemon-set-bx8lz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 11 12:58:54.590: INFO: Wrong image for pod: daemon-set-bx8lz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 11 12:58:55.570: INFO: Wrong image for pod: daemon-set-bx8lz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 11 12:58:56.591: INFO: Wrong image for pod: daemon-set-bx8lz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 11 12:58:56.591: INFO: Pod daemon-set-bx8lz is not available
Jan 11 12:58:57.571: INFO: Pod daemon-set-wjjq5 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 11 12:58:57.602: INFO: Number of nodes with available pods: 0
Jan 11 12:58:57.602: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:58.808: INFO: Number of nodes with available pods: 0
Jan 11 12:58:58.808: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:58:59.896: INFO: Number of nodes with available pods: 0
Jan 11 12:58:59.896: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:59:00.642: INFO: Number of nodes with available pods: 0
Jan 11 12:59:00.642: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:59:01.636: INFO: Number of nodes with available pods: 0
Jan 11 12:59:01.636: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:59:03.862: INFO: Number of nodes with available pods: 0
Jan 11 12:59:03.863: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:59:04.618: INFO: Number of nodes with available pods: 0
Jan 11 12:59:04.618: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:59:05.626: INFO: Number of nodes with available pods: 0
Jan 11 12:59:05.626: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 11 12:59:06.683: INFO: Number of nodes with available pods: 1
Jan 11 12:59:06.683: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-p67lv, will wait for the garbage collector to delete the pods
Jan 11 12:59:06.822: INFO: Deleting DaemonSet.extensions daemon-set took: 12.104272ms
Jan 11 12:59:06.922: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.365173ms
Jan 11 12:59:22.799: INFO: Number of nodes with available pods: 0
Jan 11 12:59:22.799: INFO: Number of running nodes: 0, number of available pods: 0
Jan 11 12:59:22.806: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-p67lv/daemonsets","resourceVersion":"17929747"},"items":null}

Jan 11 12:59:22.809: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-p67lv/pods","resourceVersion":"17929747"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:59:22.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-p67lv" for this suite.
Jan 11 12:59:28.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:59:29.009: INFO: namespace: e2e-tests-daemonsets-p67lv, resource: bindings, ignored listing per whitelist
Jan 11 12:59:29.016: INFO: namespace e2e-tests-daemonsets-p67lv deletion completed in 6.19342107s

• [SLOW TEST:49.042 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
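
With an updateStrategy of RollingUpdate, changing the DaemonSet's pod template (here from nginx:1.14-alpine to the redis test image) makes the controller replace daemon pods node by node, bounded by maxUnavailable; that is why the old pod goes unavailable before the new one reports ready. A sketch of the strategy stanza, with maxUnavailable spelled out for illustration (the default is 1):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromInt(1) // at most one node's daemon pod is replaced at a time
	ds := appsv1.DaemonSet{
		Spec: appsv1.DaemonSetSpec{
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{
					MaxUnavailable: &maxUnavailable,
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds.Spec.UpdateStrategy, "", "  ")
	fmt.Println(string(out))
}
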
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:59:29.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 11 12:59:29.295: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 11 12:59:34.303: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:59:35.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-k9tjm" for this suite.
Jan 11 12:59:44.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 12:59:45.446: INFO: namespace: e2e-tests-replication-controller-k9tjm, resource: bindings, ignored listing per whitelist
Jan 11 12:59:45.498: INFO: namespace e2e-tests-replication-controller-k9tjm deletion completed in 9.705826479s

• [SLOW TEST:16.482 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
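
A ReplicationController only owns pods whose labels match its selector, so changing the matched label on one of its pods makes the controller release it (drop the controller ownerReference) and create a replacement. A client-go sketch of that label change, assuming a recent client-go where calls take a context; the namespace and pod name are hypothetical:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Change the label so it no longer matches the RC selector (e.g. name=pod-release);
	// the controller then releases the pod and spins up a replacement.
	patch := []byte(`{"metadata":{"labels":{"name":"not-matching"}}}`)
	_, err = cs.CoreV1().Pods("default").Patch(context.TODO(),
		"pod-release-xxxxx", // hypothetical pod name
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatal(err)
	}
}
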
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 12:59:45.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 11 12:59:45.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 12:59:56.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-d45l6" for this suite.
Jan 11 13:00:44.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 13:00:44.604: INFO: namespace: e2e-tests-pods-d45l6, resource: bindings, ignored listing per whitelist
Jan 11 13:00:44.667: INFO: namespace e2e-tests-pods-d45l6 deletion completed in 48.227415598s

• [SLOW TEST:59.167 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
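
This spec drives the pod's exec subresource over a websocket connection. The same subresource can be exercised from client-go; the sketch below uses the standard SPDY executor rather than the websocket transport the conformance test targets, the pod name and command are illustrative, and it assumes client-go v0.26+ for StreamWithContext:

package main

import (
	"bytes"
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Build a request against the pod's exec subresource (pod name is hypothetical).
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("pod-exec-websockets").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"cat", "/etc/resolv.conf"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		log.Fatal(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.StreamWithContext(context.TODO(),
		remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		log.Fatal(err)
	}
	fmt.Print(stdout.String())
}
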
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 13:00:44.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0111 13:01:25.660371       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 11 13:01:25.660: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 13:01:25.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-mhjjm" for this suite.
Jan 11 13:01:40.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 13:01:41.103: INFO: namespace: e2e-tests-gc-mhjjm, resource: bindings, ignored listing per whitelist
Jan 11 13:01:41.175: INFO: namespace e2e-tests-gc-mhjjm deletion completed in 15.508549011s

• [SLOW TEST:56.508 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
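
"Delete options say so" refers to propagationPolicy=Orphan: the ReplicationController is deleted, but the garbage collector strips the ownerReferences from its pods instead of deleting them, and the 30-second wait above checks that no pod disappears. A client-go sketch, assuming a recent client-go and a hypothetical RC name:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	orphan := metav1.DeletePropagationOrphan
	// Delete the RC but orphan its pods; the garbage collector must not clean them up.
	err = cs.CoreV1().ReplicationControllers("default").Delete(context.TODO(),
		"simpletest.rc", // hypothetical RC name
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		log.Fatal(err)
	}
}
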
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 13:01:41.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 11 13:02:03.494: INFO: Successfully updated pod "pod-update-activedeadlineseconds-84835e58-3472-11ea-b0bd-0242ac110005"
Jan 11 13:02:03.494: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-84835e58-3472-11ea-b0bd-0242ac110005" in namespace "e2e-tests-pods-lj7mx" to be "terminated due to deadline exceeded"
Jan 11 13:02:03.503: INFO: Pod "pod-update-activedeadlineseconds-84835e58-3472-11ea-b0bd-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.933905ms
Jan 11 13:02:05.582: INFO: Pod "pod-update-activedeadlineseconds-84835e58-3472-11ea-b0bd-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.087109919s
Jan 11 13:02:05.582: INFO: Pod "pod-update-activedeadlineseconds-84835e58-3472-11ea-b0bd-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 13:02:05.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-lj7mx" for this suite.
Jan 11 13:02:11.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 13:02:11.731: INFO: namespace: e2e-tests-pods-lj7mx, resource: bindings, ignored listing per whitelist
Jan 11 13:02:11.854: INFO: namespace e2e-tests-pods-lj7mx deletion completed in 6.260354252s

• [SLOW TEST:30.678 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
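
spec.activeDeadlineSeconds is one of the few pod fields that may be updated on a running pod; lowering it causes the kubelet to fail the pod with reason DeadlineExceeded once the deadline has elapsed, which is the Running to Failed transition recorded above. A sketch of the field (image and value illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	deadline := int64(5) // the pod may run at most 5 seconds after this spec takes effect
	spec := corev1.PodSpec{
		ActiveDeadlineSeconds: &deadline,
		Containers: []corev1.Container{
			{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}, // illustrative image
		},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
	// Updating a running pod with a shorter deadline ends with Phase=Failed, Reason=DeadlineExceeded.
}
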
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 13:02:11.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 13:02:12.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-951cf539-3472-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-qbf8j" to be "success or failure"
Jan 11 13:02:12.181: INFO: Pod "downwardapi-volume-951cf539-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 62.664437ms
Jan 11 13:02:14.311: INFO: Pod "downwardapi-volume-951cf539-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193419611s
Jan 11 13:02:16.334: INFO: Pod "downwardapi-volume-951cf539-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216132697s
Jan 11 13:02:18.812: INFO: Pod "downwardapi-volume-951cf539-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.694284461s
Jan 11 13:02:20.856: INFO: Pod "downwardapi-volume-951cf539-3472-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.738255542s
STEP: Saw pod success
Jan 11 13:02:20.856: INFO: Pod "downwardapi-volume-951cf539-3472-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 13:02:20.873: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-951cf539-3472-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 13:02:20.962: INFO: Waiting for pod downwardapi-volume-951cf539-3472-11ea-b0bd-0242ac110005 to disappear
Jan 11 13:02:20.979: INFO: Pod downwardapi-volume-951cf539-3472-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 13:02:20.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qbf8j" for this suite.
Jan 11 13:02:27.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 13:02:27.251: INFO: namespace: e2e-tests-projected-qbf8j, resource: bindings, ignored listing per whitelist
Jan 11 13:02:27.254: INFO: namespace e2e-tests-projected-qbf8j deletion completed in 6.264012062s

• [SLOW TEST:15.399 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
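
"Set mode on item file" verifies that a per-item mode on a projected downward API file is applied to the file the container sees. A sketch of such a projected volume, exposing the pod name at a fixed path with mode 0400; the exact field and mode used by the test are not visible in the log, so treat them as illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-item file mode the pod then verifies from inside the container
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode,
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
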
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 13:02:27.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 11 13:02:27.412: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix258937705/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 13:02:27.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2h92j" for this suite.
Jan 11 13:02:33.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 13:02:33.773: INFO: namespace: e2e-tests-kubectl-2h92j, resource: bindings, ignored listing per whitelist
Jan 11 13:02:33.773: INFO: namespace e2e-tests-kubectl-2h92j deletion completed in 6.262591727s

• [SLOW TEST:6.519 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
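
kubectl proxy --unix-socket makes the proxy listen on a local unix socket instead of a TCP port; retrieving /api/ then amounts to a plain HTTP GET dialled over that socket. A standard-library Go sketch of the client side, with an illustrative socket path:

package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	socket := "/tmp/kubectl-proxy-unix/test" // illustrative path; kubectl proxy --unix-socket=<path>
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request through the unix socket rather than TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/") // host is ignored; the dialer picks the endpoint
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
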
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 13:02:33.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0111 13:02:47.870099       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 11 13:02:47.870: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 13:02:47.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-k7dx6" for this suite.
Jan 11 13:03:18.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 13:03:18.279: INFO: namespace: e2e-tests-gc-k7dx6, resource: bindings, ignored listing per whitelist
Jan 11 13:03:18.399: INFO: namespace e2e-tests-gc-k7dx6 deletion completed in 30.442487576s

• [SLOW TEST:44.626 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
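
Here each pod carries ownerReferences to both ReplicationControllers; when simpletest-rc-to-be-deleted is removed with dependent deletion, the garbage collector must keep any pod that still lists simpletest-rc-to-stay as a live owner. A sketch of what that double ownership looks like on a pod (the pod name and UIDs are placeholders):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-shared-pod", // hypothetical pod name
			OwnerReferences: []metav1.OwnerReference{
				{APIVersion: "v1", Kind: "ReplicationController",
					Name: "simpletest-rc-to-be-deleted", UID: types.UID("uid-1")},
				{APIVersion: "v1", Kind: "ReplicationController",
					Name: "simpletest-rc-to-stay", UID: types.UID("uid-2")},
			},
		},
	}
	out, _ := json.MarshalIndent(pod.ObjectMeta.OwnerReferences, "", "  ")
	fmt.Println(string(out))
	// The GC only deletes an object once every owner is gone (or is itself being
	// deleted with dependents), so the surviving RC keeps these pods alive.
}
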
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 13:03:18.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 11 13:03:20.702: INFO: Waiting up to 5m0s for pod "var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005" in namespace "e2e-tests-var-expansion-7vrs8" to be "success or failure"
Jan 11 13:03:20.723: INFO: Pod "var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.065499ms
Jan 11 13:03:22.891: INFO: Pod "var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188938623s
Jan 11 13:03:24.915: INFO: Pod "var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213414807s
Jan 11 13:03:27.380: INFO: Pod "var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.677935932s
Jan 11 13:03:29.420: INFO: Pod "var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.717601122s
Jan 11 13:03:31.439: INFO: Pod "var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.737153337s
STEP: Saw pod success
Jan 11 13:03:31.439: INFO: Pod "var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 13:03:31.448: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 11 13:03:31.696: INFO: Waiting for pod var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005 to disappear
Jan 11 13:03:31.710: INFO: Pod var-expansion-bdf2997e-3472-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 13:03:31.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-7vrs8" for this suite.
Jan 11 13:03:39.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 13:03:40.020: INFO: namespace: e2e-tests-var-expansion-7vrs8, resource: bindings, ignored listing per whitelist
Jan 11 13:03:40.084: INFO: namespace e2e-tests-var-expansion-7vrs8 deletion completed in 8.363692667s

• [SLOW TEST:21.683 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
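
$(VAR) references in a container's command and args are expanded by the kubelet from the container's own environment before the process starts, which is what "substituting values in a container's args" exercises. A sketch with illustrative image, variable and strings:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox", // illustrative image
		Env: []corev1.EnvVar{
			{Name: "POD_NAME", Value: "dapi-test-pod"}, // illustrative value
		},
		Command: []string{"sh", "-c"},
		// $(POD_NAME) is substituted from Env by the kubelet before the shell ever runs.
		Args: []string{"echo my name is $(POD_NAME)"},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
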
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 13:03:40.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 11 13:03:40.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005" in namespace "e2e-tests-projected-gb8wh" to be "success or failure"
Jan 11 13:03:40.405: INFO: Pod "downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.59133ms
Jan 11 13:03:42.836: INFO: Pod "downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.459579947s
Jan 11 13:03:44.865: INFO: Pod "downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488871397s
Jan 11 13:03:47.305: INFO: Pod "downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.928707815s
Jan 11 13:03:49.320: INFO: Pod "downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.944074876s
Jan 11 13:03:51.329: INFO: Pod "downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.952759887s
STEP: Saw pod success
Jan 11 13:03:51.329: INFO: Pod "downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005" satisfied condition "success or failure"
Jan 11 13:03:51.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005 container client-container: 
STEP: delete the pod
Jan 11 13:03:52.362: INFO: Waiting for pod downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005 to disappear
Jan 11 13:03:52.396: INFO: Pod downwardapi-volume-c9b2c9cc-3472-11ea-b0bd-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 13:03:52.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gb8wh" for this suite.
Jan 11 13:03:58.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 13:03:58.762: INFO: namespace: e2e-tests-projected-gb8wh, resource: bindings, ignored listing per whitelist
Jan 11 13:03:58.765: INFO: namespace e2e-tests-projected-gb8wh deletion completed in 6.349730367s

• [SLOW TEST:18.681 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 11 13:03:58.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 11 13:03:58.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vn2wt'
Jan 11 13:04:01.206: INFO: stderr: ""
Jan 11 13:04:01.206: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 11 13:04:11.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vn2wt -o json'
Jan 11 13:04:11.470: INFO: stderr: ""
Jan 11 13:04:11.470: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-11T13:04:01Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-vn2wt\",\n        \"resourceVersion\": \"17930572\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-vn2wt/pods/e2e-test-nginx-pod\",\n        \"uid\": \"d6220618-3472-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-9sv2f\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-9sv2f\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-9sv2f\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-11T13:04:01Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-11T13:04:10Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-11T13:04:10Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-11T13:04:01Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://18de9b41f2ee306d13ad70f83adfe24b1271a2e611148ab3242cf547840991ec\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-11T13:04:09Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-11T13:04:01Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 11 13:04:11.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-vn2wt'
Jan 11 13:04:11.815: INFO: stderr: ""
Jan 11 13:04:11.815: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 11 13:04:11.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vn2wt'
Jan 11 13:04:18.957: INFO: stderr: ""
Jan 11 13:04:18.957: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 11 13:04:18.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vn2wt" for this suite.
Jan 11 13:04:25.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 11 13:04:25.161: INFO: namespace: e2e-tests-kubectl-vn2wt, resource: bindings, ignored listing per whitelist
Jan 11 13:04:25.295: INFO: namespace e2e-tests-kubectl-vn2wt deletion completed in 6.321201s

• [SLOW TEST:26.529 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
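
kubectl replace is a full-object PUT: the test fetches the live pod as JSON, swaps spec.containers[0].image to docker.io/library/busybox:1.29 and feeds the result back with replace -f -. The equivalent get-modify-update with client-go, assuming a recent client-go where calls take a context and an illustrative namespace:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	pods := cs.CoreV1().Pods("default") // namespace is illustrative
	pod, err := pods.Get(context.TODO(), "e2e-test-nginx-pod", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Swap the single container's image and write the whole object back (a PUT,
	// which is what `kubectl replace` does under the hood).
	pod.Spec.Containers[0].Image = "docker.io/library/busybox:1.29"
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}
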
SSSSSSSS
Jan 11 13:04:25.295: INFO: Running AfterSuite actions on all nodes
Jan 11 13:04:25.295: INFO: Running AfterSuite actions on node 1
Jan 11 13:04:25.295: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8236.365 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS