I0220 10:47:03.514097 8 e2e.go:224] Starting e2e run "5442c6b0-53ce-11ea-bcb7-0242ac110008" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582195622 - Will randomize all specs
Will run 201 of 2164 specs

Feb 20 10:47:03.705: INFO: >>> kubeConfig: /root/.kube/config
Feb 20 10:47:03.710: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 20 10:47:03.735: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 20 10:47:03.835: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 20 10:47:03.836: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 20 10:47:03.836: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 20 10:47:03.889: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 20 10:47:03.889: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 20 10:47:03.889: INFO: e2e test version: v1.13.12
Feb 20 10:47:03.894: INFO: kube-apiserver version: v1.13.8
SSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 10:47:03.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
Feb 20 10:47:04.163: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
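For context on the RecreateDeployment spec that follows: the behaviour under test is the Deployment strategy type Recreate, which scales the old ReplicaSet to zero before the new one is created. A manifest that opts into it could look like the sketch below; the name, labels, namespace and image are taken from the object dumps later in this spec, but the manifest itself is an illustrative reconstruction, not the suite's actual fixture.

cat <<'EOF' | kubectl create -f - --namespace=e2e-tests-deployment-rg7mw
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate            # terminate all old pods before any new pod is created
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF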
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 10:47:04.171: INFO: Creating deployment "test-recreate-deployment" Feb 20 10:47:04.204: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 20 10:47:04.333: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Feb 20 10:47:06.801: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 20 10:47:06.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 10:47:08.905: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 10:47:11.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 10:47:13.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717792424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 10:47:14.829: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 20 10:47:14.844: INFO: Updating deployment test-recreate-deployment Feb 20 10:47:14.844: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 20 10:47:15.400: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-rg7mw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rg7mw/deployments/test-recreate-deployment,UID:54f4245e-53ce-11ea-a994-fa163e34d433,ResourceVersion:22298307,Generation:2,CreationTimestamp:2020-02-20 10:47:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-20 10:47:15 +0000 UTC 2020-02-20 10:47:15 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-20 10:47:15 +0000 UTC 2020-02-20 10:47:04 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 20 10:47:15.556: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-rg7mw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rg7mw/replicasets/test-recreate-deployment-589c4bfd,UID:5b700796-53ce-11ea-a994-fa163e34d433,ResourceVersion:22298304,Generation:1,CreationTimestamp:2020-02-20 10:47:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 54f4245e-53ce-11ea-a994-fa163e34d433 0xc000f88a0f 0xc000f88a20}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 10:47:15.556: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 20 10:47:15.556: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-rg7mw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rg7mw/replicasets/test-recreate-deployment-5bf7f65dc,UID:550c85a9-53ce-11ea-a994-fa163e34d433,ResourceVersion:22298296,Generation:2,CreationTimestamp:2020-02-20 10:47:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 54f4245e-53ce-11ea-a994-fa163e34d433 0xc000f88ae0 0xc000f88ae1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 10:47:15.578: INFO: Pod "test-recreate-deployment-589c4bfd-67f49" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-67f49,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-rg7mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rg7mw/pods/test-recreate-deployment-589c4bfd-67f49,UID:5b7d41c8-53ce-11ea-a994-fa163e34d433,ResourceVersion:22298308,Generation:0,CreationTimestamp:2020-02-20 10:47:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 5b700796-53ce-11ea-a994-fa163e34d433 0xc000f8939f 0xc000f893b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dlbwb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dlbwb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dlbwb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000f89410} {node.kubernetes.io/unreachable Exists NoExecute 0xc000f89430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 10:47:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 10:47:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 10:47:15 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 10:47:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-20 10:47:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:47:15.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-rg7mw" for this suite. Feb 20 10:47:24.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:47:24.903: INFO: namespace: e2e-tests-deployment-rg7mw, resource: bindings, ignored listing per whitelist Feb 20 10:47:25.195: INFO: namespace e2e-tests-deployment-rg7mw deletion completed in 9.608180151s • [SLOW TEST:21.301 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:47:25.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Feb 20 10:47:25.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:47:27.491: INFO: stderr: "" Feb 20 10:47:27.491: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 20 10:47:27.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:47:27.854: INFO: stderr: "" Feb 20 10:47:27.854: INFO: stdout: "update-demo-nautilus-gpckz update-demo-nautilus-wjs5b " Feb 20 10:47:27.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpckz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:47:28.085: INFO: stderr: "" Feb 20 10:47:28.085: INFO: stdout: "" Feb 20 10:47:28.085: INFO: update-demo-nautilus-gpckz is created but not running Feb 20 10:47:33.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:47:33.178: INFO: stderr: "" Feb 20 10:47:33.178: INFO: stdout: "update-demo-nautilus-gpckz update-demo-nautilus-wjs5b " Feb 20 10:47:33.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpckz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:47:33.284: INFO: stderr: "" Feb 20 10:47:33.284: INFO: stdout: "" Feb 20 10:47:33.284: INFO: update-demo-nautilus-gpckz is created but not running Feb 20 10:47:38.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:47:38.510: INFO: stderr: "" Feb 20 10:47:38.510: INFO: stdout: "update-demo-nautilus-gpckz update-demo-nautilus-wjs5b " Feb 20 10:47:38.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpckz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:47:38.727: INFO: stderr: "" Feb 20 10:47:38.728: INFO: stdout: "true" Feb 20 10:47:38.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpckz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:47:38.867: INFO: stderr: "" Feb 20 10:47:38.867: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 10:47:38.867: INFO: validating pod update-demo-nautilus-gpckz Feb 20 10:47:38.925: INFO: got data: { "image": "nautilus.jpg" } Feb 20 10:47:38.925: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 10:47:38.925: INFO: update-demo-nautilus-gpckz is verified up and running Feb 20 10:47:38.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjs5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:47:39.243: INFO: stderr: "" Feb 20 10:47:39.243: INFO: stdout: "true" Feb 20 10:47:39.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjs5b -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:47:39.349: INFO: stderr: "" Feb 20 10:47:39.349: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 10:47:39.349: INFO: validating pod update-demo-nautilus-wjs5b Feb 20 10:47:39.362: INFO: got data: { "image": "nautilus.jpg" } Feb 20 10:47:39.362: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 10:47:39.362: INFO: update-demo-nautilus-wjs5b is verified up and running STEP: rolling-update to new replication controller Feb 20 10:47:39.371: INFO: scanned /root for discovery docs: Feb 20 10:47:39.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:48:14.732: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 20 10:48:14.732: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 20 10:48:14.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:48:14.919: INFO: stderr: "" Feb 20 10:48:14.919: INFO: stdout: "update-demo-kitten-hlwvv update-demo-kitten-p9zll " Feb 20 10:48:14.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hlwvv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:48:15.033: INFO: stderr: "" Feb 20 10:48:15.033: INFO: stdout: "true" Feb 20 10:48:15.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hlwvv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:48:15.118: INFO: stderr: "" Feb 20 10:48:15.118: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 20 10:48:15.118: INFO: validating pod update-demo-kitten-hlwvv Feb 20 10:48:15.156: INFO: got data: { "image": "kitten.jpg" } Feb 20 10:48:15.156: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 20 10:48:15.156: INFO: update-demo-kitten-hlwvv is verified up and running Feb 20 10:48:15.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-p9zll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:48:15.299: INFO: stderr: "" Feb 20 10:48:15.299: INFO: stdout: "true" Feb 20 10:48:15.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-p9zll -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fqx5m' Feb 20 10:48:15.431: INFO: stderr: "" Feb 20 10:48:15.431: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 20 10:48:15.431: INFO: validating pod update-demo-kitten-p9zll Feb 20 10:48:15.443: INFO: got data: { "image": "kitten.jpg" } Feb 20 10:48:15.443: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 20 10:48:15.443: INFO: update-demo-kitten-p9zll is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:48:15.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fqx5m" for this suite. Feb 20 10:48:39.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:48:39.647: INFO: namespace: e2e-tests-kubectl-fqx5m, resource: bindings, ignored listing per whitelist Feb 20 10:48:40.008: INFO: namespace e2e-tests-kubectl-fqx5m deletion completed in 24.561766537s • [SLOW TEST:74.813 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:48:40.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 20 10:48:40.963: INFO: Waiting up to 5m0s for pod "var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008" in namespace "e2e-tests-var-expansion-kmvr4" to be "success or failure" Feb 20 10:48:40.983: INFO: Pod "var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.891892ms Feb 20 10:48:42.994: INFO: Pod "var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031408847s Feb 20 10:48:45.015: INFO: Pod "var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.051529336s Feb 20 10:48:47.029: INFO: Pod "var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066338379s Feb 20 10:48:49.063: INFO: Pod "var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100495545s Feb 20 10:48:51.367: INFO: Pod "var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.404138515s STEP: Saw pod success Feb 20 10:48:51.367: INFO: Pod "var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 10:48:51.381: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008 container dapi-container: STEP: delete the pod Feb 20 10:48:51.957: INFO: Waiting for pod var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008 to disappear Feb 20 10:48:51.982: INFO: Pod var-expansion-8e9c0f12-53ce-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:48:51.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-kmvr4" for this suite. Feb 20 10:48:58.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:48:58.263: INFO: namespace: e2e-tests-var-expansion-kmvr4, resource: bindings, ignored listing per whitelist Feb 20 10:48:58.415: INFO: namespace e2e-tests-var-expansion-kmvr4 deletion completed in 6.282390637s • [SLOW TEST:18.406 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:48:58.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-4mdwz Feb 20 10:49:06.777: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-4mdwz STEP: checking the pod's current state and verifying that restartCount is present Feb 20 10:49:06.784: INFO: Initial restart count of pod liveness-exec is 0 Feb 20 10:50:01.450: INFO: Restart count of pod e2e-tests-container-probe-4mdwz/liveness-exec is now 1 (54.665771311s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:50:01.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-4mdwz" for this suite. Feb 20 10:50:09.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:50:09.925: INFO: namespace: e2e-tests-container-probe-4mdwz, resource: bindings, ignored listing per whitelist Feb 20 10:50:09.925: INFO: namespace e2e-tests-container-probe-4mdwz deletion completed in 8.380333375s • [SLOW TEST:71.510 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:50:09.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 10:50:10.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Feb 20 10:50:10.203: INFO: stderr: "" Feb 20 10:50:10.203: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Feb 20 10:50:10.207: INFO: Not supported for server versions before "1.13.12" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:50:10.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m4cvt" for this suite. 
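Aside on the Variable Expansion spec above: composing one environment variable from another uses the $(VAR_NAME) reference syntax in the pod spec, and a variable may only reference variables defined earlier in the same env list. A minimal illustrative pod (hypothetical names and values, not the suite's exact fixture) might be:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep FOOBAR"]
    env:
    - name: FOO
      value: "foo-value"
    - name: FOOBAR                  # composed from FOO, which is defined above it
      value: "$(FOO);;bar-value"
EOF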
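Similarly, for the exec liveness probe spec above ("cat /tmp/health"): a container that removes its health file after a short while starts failing the probe and is restarted by the kubelet, which is the restart-count bump the log records. An illustrative pod, assuming a busybox image and made-up timings:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec               # pod name as in the log; the spec below is a sketch
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # non-zero exit once the file is gone
      initialDelaySeconds: 5
      periodSeconds: 5
EOF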
Feb 20 10:50:16.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:50:16.352: INFO: namespace: e2e-tests-kubectl-m4cvt, resource: bindings, ignored listing per whitelist Feb 20 10:50:16.535: INFO: namespace e2e-tests-kubectl-m4cvt deletion completed in 6.299133058s S [SKIPPING] [6.610 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 10:50:10.207: Not supported for server versions before "1.13.12" /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:50:16.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-c7bfd6b8-53ce-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 10:50:16.805: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-q77mh" to be "success or failure" Feb 20 10:50:16.830: INFO: Pod "pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.736955ms Feb 20 10:50:18.855: INFO: Pod "pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049688121s Feb 20 10:50:20.888: INFO: Pod "pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082281316s Feb 20 10:50:23.105: INFO: Pod "pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299440615s Feb 20 10:50:25.121: INFO: Pod "pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315732559s Feb 20 10:50:27.875: INFO: Pod "pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.069482333s STEP: Saw pod success Feb 20 10:50:27.875: INFO: Pod "pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 10:50:27.895: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 20 10:50:28.171: INFO: Waiting for pod pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008 to disappear Feb 20 10:50:28.202: INFO: Pod pod-projected-secrets-c7c117b5-53ce-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:50:28.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q77mh" for this suite. Feb 20 10:50:34.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:50:34.461: INFO: namespace: e2e-tests-projected-q77mh, resource: bindings, ignored listing per whitelist Feb 20 10:50:34.615: INFO: namespace e2e-tests-projected-q77mh deletion completed in 6.339155173s • [SLOW TEST:18.080 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:50:34.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-zlths [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-zlths STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-zlths STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-zlths STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-zlths Feb 20 10:50:47.052: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zlths, name: ss-0, uid: d9a31e64-53ce-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete. 
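Aside on the Projected secret spec that finished above: the point of that test is that the same secret can be projected into one pod through more than one volume. A sketch with hypothetical secret contents and mount paths, not the suite's actual fixture:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo       # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-volume-1/data-1 /etc/projected-volume-2/data-1"]
    volumeMounts:
    - name: projected-volume-1
      mountPath: /etc/projected-volume-1
    - name: projected-volume-2
      mountPath: /etc/projected-volume-2
  volumes:
  - name: projected-volume-1        # both volumes project the same secret
    projected:
      sources:
      - secret:
          name: projected-secret-demo
  - name: projected-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF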
Feb 20 10:50:47.276: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zlths, name: ss-0, uid: d9a31e64-53ce-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Feb 20 10:50:47.452: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zlths, name: ss-0, uid: d9a31e64-53ce-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Feb 20 10:50:47.474: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-zlths STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-zlths STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-zlths and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 20 10:50:58.578: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zlths Feb 20 10:50:58.592: INFO: Scaling statefulset ss to 0 Feb 20 10:51:18.718: INFO: Waiting for statefulset status.replicas updated to 0 Feb 20 10:51:18.730: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:51:18.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-zlths" for this suite. Feb 20 10:51:26.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:51:27.005: INFO: namespace: e2e-tests-statefulset-zlths, resource: bindings, ignored listing per whitelist Feb 20 10:51:27.082: INFO: namespace e2e-tests-statefulset-zlths deletion completed in 8.183503694s • [SLOW TEST:52.466 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:51:27.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 10:51:27.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-2c46r" to be "success or failure" Feb 20 10:51:27.505: INFO: Pod "downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008": Phase="Pending", 
Reason="", readiness=false. Elapsed: 36.658368ms Feb 20 10:51:30.054: INFO: Pod "downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.585431186s Feb 20 10:51:32.067: INFO: Pod "downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.598466983s Feb 20 10:51:34.704: INFO: Pod "downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.235765456s Feb 20 10:51:36.734: INFO: Pod "downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.265549442s Feb 20 10:51:38.780: INFO: Pod "downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.311953003s STEP: Saw pod success Feb 20 10:51:38.781: INFO: Pod "downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 10:51:38.807: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 10:51:39.066: INFO: Waiting for pod downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008 to disappear Feb 20 10:51:39.095: INFO: Pod downwardapi-volume-f1cfba5e-53ce-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:51:39.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2c46r" for this suite. Feb 20 10:51:45.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:51:45.476: INFO: namespace: e2e-tests-projected-2c46r, resource: bindings, ignored listing per whitelist Feb 20 10:51:45.560: INFO: namespace e2e-tests-projected-2c46r deletion completed in 6.460376038s • [SLOW TEST:18.478 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:51:45.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6q4wh STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 20 10:51:45.839: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 20 10:52:22.076: INFO: ExecWithOptions 
{Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6q4wh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 10:52:22.076: INFO: >>> kubeConfig: /root/.kube/config I0220 10:52:22.161448 8 log.go:172] (0xc000bf8420) (0xc0014d3180) Create stream I0220 10:52:22.161517 8 log.go:172] (0xc000bf8420) (0xc0014d3180) Stream added, broadcasting: 1 I0220 10:52:22.167138 8 log.go:172] (0xc000bf8420) Reply frame received for 1 I0220 10:52:22.167174 8 log.go:172] (0xc000bf8420) (0xc0011dfae0) Create stream I0220 10:52:22.167187 8 log.go:172] (0xc000bf8420) (0xc0011dfae0) Stream added, broadcasting: 3 I0220 10:52:22.168532 8 log.go:172] (0xc000bf8420) Reply frame received for 3 I0220 10:52:22.168557 8 log.go:172] (0xc000bf8420) (0xc0011dfb80) Create stream I0220 10:52:22.168567 8 log.go:172] (0xc000bf8420) (0xc0011dfb80) Stream added, broadcasting: 5 I0220 10:52:22.169826 8 log.go:172] (0xc000bf8420) Reply frame received for 5 I0220 10:52:23.372652 8 log.go:172] (0xc000bf8420) Data frame received for 3 I0220 10:52:23.372774 8 log.go:172] (0xc0011dfae0) (3) Data frame handling I0220 10:52:23.372796 8 log.go:172] (0xc0011dfae0) (3) Data frame sent I0220 10:52:23.604206 8 log.go:172] (0xc000bf8420) (0xc0011dfae0) Stream removed, broadcasting: 3 I0220 10:52:23.604300 8 log.go:172] (0xc000bf8420) Data frame received for 1 I0220 10:52:23.604328 8 log.go:172] (0xc0014d3180) (1) Data frame handling I0220 10:52:23.604369 8 log.go:172] (0xc0014d3180) (1) Data frame sent I0220 10:52:23.604402 8 log.go:172] (0xc000bf8420) (0xc0014d3180) Stream removed, broadcasting: 1 I0220 10:52:23.604479 8 log.go:172] (0xc000bf8420) (0xc0011dfb80) Stream removed, broadcasting: 5 I0220 10:52:23.604655 8 log.go:172] (0xc000bf8420) (0xc0014d3180) Stream removed, broadcasting: 1 I0220 10:52:23.604680 8 log.go:172] (0xc000bf8420) (0xc0011dfae0) Stream removed, broadcasting: 3 I0220 10:52:23.604692 8 log.go:172] (0xc000bf8420) (0xc0011dfb80) Stream removed, broadcasting: 5 I0220 10:52:23.605023 8 log.go:172] (0xc000bf8420) Go away received Feb 20 10:52:23.605: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:52:23.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-6q4wh" for this suite. 
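Aside on the Projected downwardAPI spec earlier: the container's own memory request is exposed to it through a downwardAPI projection with a resourceFieldRef. An illustrative pod, with a made-up request and mount path rather than the suite's fixture:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi           # value is reported in units of 1Mi
EOF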
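The UDP connectivity probe driven through ExecWithOptions above could also be reproduced by hand against the same helper pod while the namespace still exists; the pod name, container, namespace, address and port below are taken from this run:

kubectl --kubeconfig=/root/.kube/config exec host-test-container-pod -c hostexec \
  --namespace=e2e-tests-pod-network-test-6q4wh -- \
  /bin/sh -c "echo 'hostName' | nc -w 1 -u 10.32.0.4 8081"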
Feb 20 10:52:47.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:52:47.954: INFO: namespace: e2e-tests-pod-network-test-6q4wh, resource: bindings, ignored listing per whitelist Feb 20 10:52:47.982: INFO: namespace e2e-tests-pod-network-test-6q4wh deletion completed in 24.350429038s • [SLOW TEST:62.421 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:52:47.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Feb 20 10:52:48.754: INFO: created pod pod-service-account-defaultsa Feb 20 10:52:48.754: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 20 10:52:48.797: INFO: created pod pod-service-account-mountsa Feb 20 10:52:48.797: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 20 10:52:48.815: INFO: created pod pod-service-account-nomountsa Feb 20 10:52:48.815: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 20 10:52:48.904: INFO: created pod pod-service-account-defaultsa-mountspec Feb 20 10:52:48.904: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 20 10:52:48.948: INFO: created pod pod-service-account-mountsa-mountspec Feb 20 10:52:48.948: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 20 10:52:48.972: INFO: created pod pod-service-account-nomountsa-mountspec Feb 20 10:52:48.972: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 20 10:52:49.109: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 20 10:52:49.109: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 20 10:52:49.154: INFO: created pod pod-service-account-mountsa-nomountspec Feb 20 10:52:49.154: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 20 10:52:49.291: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 20 10:52:49.291: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:52:49.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-svcaccounts-5z4ws" for this suite. Feb 20 10:53:17.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:53:17.577: INFO: namespace: e2e-tests-svcaccounts-5z4ws, resource: bindings, ignored listing per whitelist Feb 20 10:53:17.866: INFO: namespace e2e-tests-svcaccounts-5z4ws deletion completed in 28.548864725s • [SLOW TEST:29.885 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:53:17.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9lscx STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 20 10:53:18.049: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 20 10:53:50.490: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9lscx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 10:53:50.490: INFO: >>> kubeConfig: /root/.kube/config I0220 10:53:50.694942 8 log.go:172] (0xc000bf82c0) (0xc001dde960) Create stream I0220 10:53:50.694999 8 log.go:172] (0xc000bf82c0) (0xc001dde960) Stream added, broadcasting: 1 I0220 10:53:50.713926 8 log.go:172] (0xc000bf82c0) Reply frame received for 1 I0220 10:53:50.713963 8 log.go:172] (0xc000bf82c0) (0xc001ddea00) Create stream I0220 10:53:50.713977 8 log.go:172] (0xc000bf82c0) (0xc001ddea00) Stream added, broadcasting: 3 I0220 10:53:50.715610 8 log.go:172] (0xc000bf82c0) Reply frame received for 3 I0220 10:53:50.715670 8 log.go:172] (0xc000bf82c0) (0xc0009ce500) Create stream I0220 10:53:50.715693 8 log.go:172] (0xc000bf82c0) (0xc0009ce500) Stream added, broadcasting: 5 I0220 10:53:50.718611 8 log.go:172] (0xc000bf82c0) Reply frame received for 5 I0220 10:53:51.027901 8 log.go:172] (0xc000bf82c0) Data frame received for 3 I0220 10:53:51.027969 8 log.go:172] (0xc001ddea00) (3) Data frame handling I0220 10:53:51.027982 8 log.go:172] (0xc001ddea00) (3) Data frame sent I0220 10:53:51.160437 8 log.go:172] (0xc000bf82c0) Data frame received for 1 I0220 10:53:51.160537 8 log.go:172] (0xc000bf82c0) (0xc001ddea00) Stream removed, broadcasting: 3 I0220 10:53:51.160576 8 log.go:172] (0xc001dde960) (1) Data frame handling I0220 10:53:51.160593 8 log.go:172] (0xc001dde960) (1) Data frame sent I0220 10:53:51.160635 8 log.go:172] (0xc000bf82c0) (0xc0009ce500) Stream 
removed, broadcasting: 5 I0220 10:53:51.160675 8 log.go:172] (0xc000bf82c0) (0xc001dde960) Stream removed, broadcasting: 1 I0220 10:53:51.160705 8 log.go:172] (0xc000bf82c0) Go away received I0220 10:53:51.160915 8 log.go:172] (0xc000bf82c0) (0xc001dde960) Stream removed, broadcasting: 1 I0220 10:53:51.160935 8 log.go:172] (0xc000bf82c0) (0xc001ddea00) Stream removed, broadcasting: 3 I0220 10:53:51.160963 8 log.go:172] (0xc000bf82c0) (0xc0009ce500) Stream removed, broadcasting: 5 Feb 20 10:53:51.160: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:53:51.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-9lscx" for this suite. Feb 20 10:54:15.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:54:15.474: INFO: namespace: e2e-tests-pod-network-test-9lscx, resource: bindings, ignored listing per whitelist Feb 20 10:54:15.518: INFO: namespace e2e-tests-pod-network-test-9lscx deletion completed in 24.334758558s • [SLOW TEST:57.651 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:54:15.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 20 10:54:23.841: INFO: 10 pods remaining Feb 20 10:54:23.841: INFO: 10 pods has nil DeletionTimestamp Feb 20 10:54:23.841: INFO: Feb 20 10:54:25.829: INFO: 9 pods remaining Feb 20 10:54:25.829: INFO: 0 pods has nil DeletionTimestamp Feb 20 10:54:25.829: INFO: Feb 20 10:54:27.114: INFO: 0 pods remaining Feb 20 10:54:27.114: INFO: 0 pods has nil DeletionTimestamp Feb 20 10:54:27.114: INFO: STEP: Gathering metrics W0220 10:54:28.585004 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
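The "deleteOptions says so" in this garbage-collector spec is a foreground deletion: the ReplicationController gets a foregroundDeletion finalizer and is only removed once its pods are gone, which is why the log counts the pods down from 10 to 0 before the RC disappears. A rough equivalent against the raw API (namespace and RC name are placeholders):

    # Delete an RC with propagationPolicy=Foreground; the RC object lingers
    # (with a foregroundDeletion finalizer) until all of its pods are deleted.
    kubectl proxy --port=8001 &
    curl -s -X DELETE \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc
    # Recent kubectl exposes the same behaviour directly:
    #   kubectl delete rc my-rc --cascade=foreground   (kubectl >= 1.20)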
Feb 20 10:54:28.585: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:54:28.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-tcjgx" for this suite. Feb 20 10:54:42.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:54:42.864: INFO: namespace: e2e-tests-gc-tcjgx, resource: bindings, ignored listing per whitelist Feb 20 10:54:42.928: INFO: namespace e2e-tests-gc-tcjgx deletion completed in 14.3213258s • [SLOW TEST:27.409 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:54:42.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0220 10:54:54.743388 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
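The companion spec above ("delete pods created by rc when not orphaning") relies on the default background cascade: the RC is removed immediately and the garbage collector then deletes the pods it owned via their ownerReferences. A minimal way to watch that by hand (RC name and label selector are placeholders):

    # Default deletion: the RC goes away at once, then the GC removes its pods.
    kubectl delete rc my-rc                      # background cascade is the default
    kubectl get pods -l name=my-rc --watch       # pods move to Terminating and disappear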
Feb 20 10:54:54.743: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:54:54.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-lgjd5" for this suite. Feb 20 10:55:00.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:55:00.986: INFO: namespace: e2e-tests-gc-lgjd5, resource: bindings, ignored listing per whitelist Feb 20 10:55:00.990: INFO: namespace e2e-tests-gc-lgjd5 deletion completed in 6.219531921s • [SLOW TEST:18.062 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:55:00.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 20 10:55:11.890: INFO: Successfully updated pod "annotationupdate713c4bb9-53cf-11ea-bcb7-0242ac110008" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:55:14.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rm2sr" for this suite. 
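The downward-api spec above works because a downwardAPI volume that projects metadata.annotations is refreshed by the kubelet after the pod's annotations change. A minimal pod to try the same wiring (names and image are placeholders; the image only needs a shell):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotation-demo
      annotations:
        build: "1"
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
    EOF
    # Change an annotation and watch the projected file follow after the kubelet sync period:
    kubectl annotate pod annotation-demo build=2 --overwrite
    kubectl logs -f annotation-demo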
Feb 20 10:55:32.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:55:32.409: INFO: namespace: e2e-tests-downward-api-rm2sr, resource: bindings, ignored listing per whitelist Feb 20 10:55:32.430: INFO: namespace e2e-tests-downward-api-rm2sr deletion completed in 18.371371657s • [SLOW TEST:31.439 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:55:32.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 20 10:55:32.920: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-a,UID:842e7437-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299668,Generation:0,CreationTimestamp:2020-02-20 10:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 20 10:55:32.920: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-a,UID:842e7437-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299668,Generation:0,CreationTimestamp:2020-02-20 10:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 20 10:55:42.998: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-a,UID:842e7437-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299680,Generation:0,CreationTimestamp:2020-02-20 10:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 20 10:55:42.999: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-a,UID:842e7437-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299680,Generation:0,CreationTimestamp:2020-02-20 10:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 20 10:55:53.037: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-a,UID:842e7437-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299692,Generation:0,CreationTimestamp:2020-02-20 10:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 20 10:55:53.037: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-a,UID:842e7437-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299692,Generation:0,CreationTimestamp:2020-02-20 10:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 20 10:56:03.060: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-a,UID:842e7437-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299705,Generation:0,CreationTimestamp:2020-02-20 10:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Feb 20 10:56:03.060: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-a,UID:842e7437-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299705,Generation:0,CreationTimestamp:2020-02-20 10:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 20 10:56:13.089: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-b,UID:9c1d36d0-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299718,Generation:0,CreationTimestamp:2020-02-20 10:56:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 20 10:56:13.089: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-b,UID:9c1d36d0-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299718,Generation:0,CreationTimestamp:2020-02-20 10:56:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 20 10:56:23.111: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-b,UID:9c1d36d0-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299731,Generation:0,CreationTimestamp:2020-02-20 10:56:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 20 10:56:23.111: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-bjcd7,SelfLink:/api/v1/namespaces/e2e-tests-watch-bjcd7/configmaps/e2e-watch-test-configmap-b,UID:9c1d36d0-53cf-11ea-a994-fa163e34d433,ResourceVersion:22299731,Generation:0,CreationTimestamp:2020-02-20 10:56:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:56:33.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-bjcd7" for this suite. Feb 20 10:56:39.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:56:39.323: INFO: namespace: e2e-tests-watch-bjcd7, resource: bindings, ignored listing per whitelist Feb 20 10:56:39.338: INFO: namespace e2e-tests-watch-bjcd7 deletion completed in 6.213380979s • [SLOW TEST:66.907 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:56:39.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-dxrt STEP: Creating a pod to test atomic-volume-subpath Feb 20 10:56:39.757: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dxrt" in namespace "e2e-tests-subpath-q97cg" to be "success or failure" Feb 20 10:56:39.981: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 223.684121ms Feb 20 10:56:41.996: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238445726s Feb 20 10:56:44.012: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254157687s Feb 20 10:56:46.020: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2631216s Feb 20 10:56:48.040: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28216544s Feb 20 10:56:50.054: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.297007557s Feb 20 10:56:52.091: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.333462214s Feb 20 10:56:54.104: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.346672914s Feb 20 10:56:56.123: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.36578564s Feb 20 10:56:58.138: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Running", Reason="", readiness=false. Elapsed: 18.380811861s Feb 20 10:57:00.153: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Running", Reason="", readiness=false. Elapsed: 20.395339365s Feb 20 10:57:02.174: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Running", Reason="", readiness=false. Elapsed: 22.41686392s Feb 20 10:57:04.191: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Running", Reason="", readiness=false. Elapsed: 24.433924245s Feb 20 10:57:06.207: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Running", Reason="", readiness=false. Elapsed: 26.449772603s Feb 20 10:57:08.226: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Running", Reason="", readiness=false. Elapsed: 28.468184207s Feb 20 10:57:10.242: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Running", Reason="", readiness=false. Elapsed: 30.484742975s Feb 20 10:57:12.265: INFO: Pod "pod-subpath-test-secret-dxrt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.507787538s STEP: Saw pod success Feb 20 10:57:12.265: INFO: Pod "pod-subpath-test-secret-dxrt" satisfied condition "success or failure" Feb 20 10:57:12.270: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-dxrt container test-container-subpath-secret-dxrt: STEP: delete the pod Feb 20 10:57:12.582: INFO: Waiting for pod pod-subpath-test-secret-dxrt to disappear Feb 20 10:57:12.623: INFO: Pod pod-subpath-test-secret-dxrt no longer exists STEP: Deleting pod pod-subpath-test-secret-dxrt Feb 20 10:57:12.623: INFO: Deleting pod "pod-subpath-test-secret-dxrt" in namespace "e2e-tests-subpath-q97cg" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:57:12.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-q97cg" for this suite. 
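The subpath spec above mounts a single key of a Secret at a subPath inside the container. A minimal reproduction (secret name, key and image are placeholders); note that, unlike a whole-volume mount, a subPath mount is not updated if the Secret changes later:

    kubectl create secret generic demo-secret --from-literal=password=s3cr3t
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "cat /etc/creds/password && sleep 3600"]
        volumeMounts:
        - name: creds
          mountPath: /etc/creds/password
          subPath: password
      volumes:
      - name: creds
        secret:
          secretName: demo-secret
    EOF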
Feb 20 10:57:18.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:57:18.775: INFO: namespace: e2e-tests-subpath-q97cg, resource: bindings, ignored listing per whitelist Feb 20 10:57:19.033: INFO: namespace e2e-tests-subpath-q97cg deletion completed in 6.38531854s • [SLOW TEST:39.695 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:57:19.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 20 10:57:29.989: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c38fcfa7-53cf-11ea-bcb7-0242ac110008" Feb 20 10:57:29.989: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c38fcfa7-53cf-11ea-bcb7-0242ac110008" in namespace "e2e-tests-pods-h4kdp" to be "terminated due to deadline exceeded" Feb 20 10:57:30.053: INFO: Pod "pod-update-activedeadlineseconds-c38fcfa7-53cf-11ea-bcb7-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 63.853245ms Feb 20 10:57:32.294: INFO: Pod "pod-update-activedeadlineseconds-c38fcfa7-53cf-11ea-bcb7-0242ac110008": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.305175312s Feb 20 10:57:32.294: INFO: Pod "pod-update-activedeadlineseconds-c38fcfa7-53cf-11ea-bcb7-0242ac110008" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:57:32.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-h4kdp" for this suite. 
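activeDeadlineSeconds is one of the few pod spec fields that may be updated on a running pod; once the deadline elapses the kubelet kills the pod and it ends up Failed with reason DeadlineExceeded, exactly as logged above. A quick way to try it (pod name is a placeholder):

    # Give a running pod a short deadline; it is terminated and marked
    # Failed/DeadlineExceeded a few seconds later.
    kubectl patch pod my-pod -p '{"spec":{"activeDeadlineSeconds":5}}'
    kubectl get pod my-pod -o jsonpath='{.status.phase}/{.status.reason}'; echo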
Feb 20 10:57:38.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:57:38.765: INFO: namespace: e2e-tests-pods-h4kdp, resource: bindings, ignored listing per whitelist Feb 20 10:57:38.797: INFO: namespace e2e-tests-pods-h4kdp deletion completed in 6.493028018s • [SLOW TEST:19.764 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:57:38.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-cf5c4776-53cf-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 10:57:39.097: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf6023f8-53cf-11ea-bcb7-0242ac110008" in namespace "e2e-tests-configmap-6qpbr" to be "success or failure" Feb 20 10:57:39.222: INFO: Pod "pod-configmaps-cf6023f8-53cf-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 125.161786ms Feb 20 10:57:41.247: INFO: Pod "pod-configmaps-cf6023f8-53cf-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149592371s Feb 20 10:57:43.260: INFO: Pod "pod-configmaps-cf6023f8-53cf-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163069979s Feb 20 10:57:45.357: INFO: Pod "pod-configmaps-cf6023f8-53cf-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259786091s Feb 20 10:57:47.396: INFO: Pod "pod-configmaps-cf6023f8-53cf-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.298804361s STEP: Saw pod success Feb 20 10:57:47.396: INFO: Pod "pod-configmaps-cf6023f8-53cf-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 10:57:47.418: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-cf6023f8-53cf-11ea-bcb7-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 20 10:57:47.568: INFO: Waiting for pod pod-configmaps-cf6023f8-53cf-11ea-bcb7-0242ac110008 to disappear Feb 20 10:57:47.582: INFO: Pod pod-configmaps-cf6023f8-53cf-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:57:47.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6qpbr" for this suite. 
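The configmap spec above consumes a ConfigMap through a volume while the container runs as a non-root user. A minimal equivalent pod (names, UID and image are placeholders):

    kubectl create configmap demo-config --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-nonroot-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000
        runAsNonRoot: true
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "id && cat /etc/config/data-1"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: demo-config
    EOF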
Feb 20 10:57:53.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:57:53.901: INFO: namespace: e2e-tests-configmap-6qpbr, resource: bindings, ignored listing per whitelist Feb 20 10:57:53.909: INFO: namespace e2e-tests-configmap-6qpbr deletion completed in 6.31624328s • [SLOW TEST:15.111 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:57:53.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 20 10:57:54.192: INFO: Waiting up to 5m0s for pod "downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-86xbv" to be "success or failure" Feb 20 10:57:54.281: INFO: Pod "downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 88.954029ms Feb 20 10:57:56.303: INFO: Pod "downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111316771s Feb 20 10:57:58.333: INFO: Pod "downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141362647s Feb 20 10:58:00.350: INFO: Pod "downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158382336s Feb 20 10:58:02.359: INFO: Pod "downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167591275s Feb 20 10:58:04.373: INFO: Pod "downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181056311s STEP: Saw pod success Feb 20 10:58:04.373: INFO: Pod "downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 10:58:04.377: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008 container dapi-container: STEP: delete the pod Feb 20 10:58:04.457: INFO: Waiting for pod downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008 to disappear Feb 20 10:58:04.496: INFO: Pod downward-api-d85f48d3-53cf-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:58:04.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-86xbv" for this suite. 
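The downward-api spec above exposes the pod's own UID to its container through an environment variable sourced via fieldRef. A minimal pod showing the same wiring (names and image are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-uid-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo POD_UID=$POD_UID"]
        env:
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
    EOF
    kubectl logs downward-uid-demo   # prints the UID once the container has run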
Feb 20 10:58:10.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:58:10.683: INFO: namespace: e2e-tests-downward-api-86xbv, resource: bindings, ignored listing per whitelist Feb 20 10:58:10.738: INFO: namespace e2e-tests-downward-api-86xbv deletion completed in 6.229193719s • [SLOW TEST:16.829 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:58:10.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 10:58:10.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-26hsg' Feb 20 10:58:13.017: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 20 10:58:13.017: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Feb 20 10:58:13.042: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Feb 20 10:58:13.173: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 20 10:58:13.207: INFO: scanned /root for discovery docs: Feb 20 10:58:13.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-26hsg' Feb 20 10:58:37.491: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 20 10:58:37.491: INFO: stdout: "Created e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135\nScaling up e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Feb 20 10:58:37.491: INFO: stdout: "Created e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135\nScaling up e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Feb 20 10:58:37.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-26hsg' Feb 20 10:58:37.686: INFO: stderr: "" Feb 20 10:58:37.686: INFO: stdout: "e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135-hhqc6 e2e-test-nginx-rc-drcqt " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 20 10:58:42.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-26hsg' Feb 20 10:58:42.900: INFO: stderr: "" Feb 20 10:58:42.900: INFO: stdout: "e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135-hhqc6 " Feb 20 10:58:42.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135-hhqc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26hsg' Feb 20 10:58:43.076: INFO: stderr: "" Feb 20 10:58:43.076: INFO: stdout: "true" Feb 20 10:58:43.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135-hhqc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26hsg' Feb 20 10:58:43.235: INFO: stderr: "" Feb 20 10:58:43.235: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 20 10:58:43.235: INFO: e2e-test-nginx-rc-abdd988b583dd7925f147b0432af2135-hhqc6 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Feb 20 10:58:43.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-26hsg' Feb 20 10:58:43.378: INFO: stderr: "" Feb 20 10:58:43.378: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:58:43.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-26hsg" for this suite. 
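kubectl itself warns above that rolling-update is deprecated (it only ever worked on ReplicationControllers and was removed in later releases). The closest modern equivalent of "rolling-update to the same image" uses a Deployment plus kubectl rollout; a hedged sketch, assuming kubectl 1.15+ for rollout restart:

    kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
    kubectl rollout status deployment/e2e-test-nginx
    # "Rolling update to the same image": restart the rollout instead of changing the tag.
    kubectl rollout restart deployment/e2e-test-nginx      # kubectl >= 1.15
    kubectl rollout status deployment/e2e-test-nginx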
Feb 20 10:58:51.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:58:51.551: INFO: namespace: e2e-tests-kubectl-26hsg, resource: bindings, ignored listing per whitelist Feb 20 10:58:51.628: INFO: namespace e2e-tests-kubectl-26hsg deletion completed in 8.242261846s • [SLOW TEST:40.890 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:58:51.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0220 10:59:22.521584 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 20 10:59:22.521: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:59:22.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-xgbcj" for this suite. 
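Here the deployment is deleted with propagationPolicy=Orphan, so its ReplicaSet survives without an owner and the garbage collector must not remove it during the 30-second window the test waits. A rough equivalent (deployment name and label are placeholders):

    # Orphaning delete: the Deployment object is removed but its ReplicaSet
    # (and therefore its pods) are left behind without an owner reference.
    kubectl delete deployment my-deploy --cascade=orphan   # kubectl >= 1.20; older clients: --cascade=false
    kubectl get rs -l app=my-deploy                        # the ReplicaSet is still there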
Feb 20 10:59:30.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 10:59:31.179: INFO: namespace: e2e-tests-gc-xgbcj, resource: bindings, ignored listing per whitelist Feb 20 10:59:31.330: INFO: namespace e2e-tests-gc-xgbcj deletion completed in 8.795108496s • [SLOW TEST:39.701 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 10:59:31.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 20 10:59:31.647: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 10:59:53.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-wmq6q" for this suite. 
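The init-container spec above checks that, with restartPolicy Never, a failing init container fails the whole pod and the app container never starts. A minimal pod reproducing that state (names and image are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-fail
        image: busybox
        command: ["false"]          # exits 1, so initialization fails
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "echo should never run"]
    EOF
    kubectl get pod init-fail-demo   # shows Init:Error and the pod phase Failed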
Feb 20 11:00:01.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:00:01.294: INFO: namespace: e2e-tests-init-container-wmq6q, resource: bindings, ignored listing per whitelist Feb 20 11:00:01.383: INFO: namespace e2e-tests-init-container-wmq6q deletion completed in 8.266767623s • [SLOW TEST:30.053 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:00:01.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-25knn STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-25knn to expose endpoints map[] Feb 20 11:00:01.705: INFO: Get endpoints failed (7.757625ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 20 11:00:02.734: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-25knn exposes endpoints map[] (1.037213886s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-25knn STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-25knn to expose endpoints map[pod1:[100]] Feb 20 11:00:06.935: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.157101014s elapsed, will retry) Feb 20 11:00:11.167: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-25knn exposes endpoints map[pod1:[100]] (8.388516907s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-25knn STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-25knn to expose endpoints map[pod1:[100] pod2:[101]] Feb 20 11:00:15.351: INFO: Unexpected endpoints: found map[25039da3-53d0-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.17517695s elapsed, will retry) Feb 20 11:00:20.751: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-25knn exposes endpoints map[pod1:[100] pod2:[101]] (9.575635269s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-25knn STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-25knn to expose endpoints map[pod2:[101]] Feb 20 11:00:21.871: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-25knn exposes endpoints map[pod2:[101]] (1.111256913s elapsed) STEP: 
Deleting pod pod2 in namespace e2e-tests-services-25knn STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-25knn to expose endpoints map[] Feb 20 11:00:22.965: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-25knn exposes endpoints map[] (1.06289012s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:00:23.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-25knn" for this suite. Feb 20 11:00:47.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:00:47.508: INFO: namespace: e2e-tests-services-25knn, resource: bindings, ignored listing per whitelist Feb 20 11:00:48.263: INFO: namespace e2e-tests-services-25knn deletion completed in 25.082773229s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:46.880 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:00:48.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 11:00:48.667: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 20 11:00:53.683: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 20 11:00:59.700: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 20 11:00:59.752: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-d8jnr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d8jnr/deployments/test-cleanup-deployment,UID:46f74afc-53d0-11ea-a994-fa163e34d433,ResourceVersion:22300421,Generation:1,CreationTimestamp:2020-02-20 11:00:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 20 11:00:59.791: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:00:59.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-d8jnr" for this suite.
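The dump above shows test-cleanup-deployment created with RevisionHistoryLimit:*0, which is what lets the Deployment controller delete old ReplicaSets as soon as they are scaled down. Purely as an illustrative sketch (this is not the e2e framework's code; the object name, the int32Ptr helper, and the availability of the k8s.io/api and k8s.io/apimachinery modules are all assumptions here), an equivalent object could be built and printed like this:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// int32Ptr is a small local helper (not part of the k8s API) for pointer fields.
func int32Ptr(i int32) *int32 { return &i }

func main() {
	// revisionHistoryLimit: 0 tells the Deployment controller to garbage-collect
	// old ReplicaSets as soon as they are scaled down, which is the behaviour
	// the "should delete old replica sets" spec above waits for.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "cleanup-demo", Labels: map[string]string{"name": "cleanup-pod"}},
		Spec: appsv1.DeploymentSpec{
			Replicas:             int32Ptr(1),
			RevisionHistoryLimit: int32Ptr(0),
			Selector:             &metav1.LabelSelector{MatchLabels: map[string]string{"name": "cleanup-pod"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "cleanup-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "redis", Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}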
Feb 20 11:01:07.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:01:08.044: INFO: namespace: e2e-tests-deployment-d8jnr, resource: bindings, ignored listing per whitelist Feb 20 11:01:08.124: INFO: namespace e2e-tests-deployment-d8jnr deletion completed in 8.197480976s • [SLOW TEST:19.861 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:01:08.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 20 11:04:09.844: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:09.999: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:12.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:12.019: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:14.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:14.023: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:16.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:16.013: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:18.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:18.090: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:20.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:20.010: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:22.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:22.171: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:24.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:24.134: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:26.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:26.009: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:28.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:28.012: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:30.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 
11:04:30.018: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:32.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:32.017: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:34.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:34.017: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:36.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:36.016: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:38.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:38.019: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:40.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:40.015: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:42.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:42.035: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:44.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:44.016: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:46.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:46.019: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:48.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:48.022: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:50.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:50.056: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:52.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:52.009: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:54.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:54.023: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:56.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:56.049: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:04:58.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:04:58.015: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:00.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:00.012: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:02.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:02.028: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:04.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:04.034: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:06.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:06.022: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:08.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:08.012: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:10.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:10.414: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:12.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:12.010: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:14.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:14.021: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 
11:05:16.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:16.013: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:18.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:18.012: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:20.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:20.106: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:22.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:22.018: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:24.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:24.025: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:26.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:26.017: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:28.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:28.023: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:30.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:30.014: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:32.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:32.030: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:34.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:34.013: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:36.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:36.026: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:38.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:38.017: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:40.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:40.027: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:42.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:42.014: INFO: Pod pod-with-poststart-exec-hook still exists Feb 20 11:05:44.000: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 20 11:05:44.071: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:05:44.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-j9l8w" for this suite. 
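The spec above creates a pod whose container declares a postStart exec hook, checks that the hook ran, and then polls until the deleted pod is gone. A minimal sketch of such a pod follows; it assumes a recent k8s.io/api where the hook handler type is named LifecycleHandler (the 1.13-era API behind this log called it Handler), and the pod name, container name, and busybox image are placeholders rather than the test's actual values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A postStart exec hook runs inside the container right after it starts.
	// Note: LifecycleHandler is the current type name; older API versions used Handler.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "poststart-demo", // placeholder name
				Image:   "busybox",        // placeholder image
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo started > /tmp/poststart"}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}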
Feb 20 11:06:10.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:06:10.226: INFO: namespace: e2e-tests-container-lifecycle-hook-j9l8w, resource: bindings, ignored listing per whitelist Feb 20 11:06:10.288: INFO: namespace e2e-tests-container-lifecycle-hook-j9l8w deletion completed in 26.196436662s • [SLOW TEST:302.163 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:06:10.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 20 11:06:10.553: INFO: Waiting up to 5m0s for pod "downward-api-0036d8ad-53d1-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-qlpnn" to be "success or failure" Feb 20 11:06:10.568: INFO: Pod "downward-api-0036d8ad-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.340527ms Feb 20 11:06:12.912: INFO: Pod "downward-api-0036d8ad-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358711057s Feb 20 11:06:14.928: INFO: Pod "downward-api-0036d8ad-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374951733s Feb 20 11:06:17.164: INFO: Pod "downward-api-0036d8ad-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.610592739s Feb 20 11:06:19.179: INFO: Pod "downward-api-0036d8ad-53d1-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.625639993s STEP: Saw pod success Feb 20 11:06:19.179: INFO: Pod "downward-api-0036d8ad-53d1-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:06:19.188: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-0036d8ad-53d1-11ea-bcb7-0242ac110008 container dapi-container: STEP: delete the pod Feb 20 11:06:19.290: INFO: Waiting for pod downward-api-0036d8ad-53d1-11ea-bcb7-0242ac110008 to disappear Feb 20 11:06:19.298: INFO: Pod downward-api-0036d8ad-53d1-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:06:19.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qlpnn" for this suite. 
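The downward-api pod above exposes its own limits.cpu/memory and requests.cpu/memory to the container as environment variables through resourceFieldRef and then inspects the container output. A rough sketch of that wiring, with invented pod/container names and the k8s.io/api module assumed to be available:

package main

import (
	"encoding/json"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Each env var is resolved from the container's own resource limits/requests
	// via resourceFieldRef, similar to what the dapi-container above prints.
	resources := []string{"limits.cpu", "limits.memory", "requests.cpu", "requests.memory"}
	env := make([]corev1.EnvVar, 0, len(resources))
	for _, r := range resources {
		env = append(env, corev1.EnvVar{
			Name:      strings.ToUpper(strings.ReplaceAll(r, ".", "_")),
			ValueFrom: &corev1.EnvVarSource{ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: r}},
		})
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-demo",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "env"},
				Env:     env,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}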
Feb 20 11:06:25.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:06:25.589: INFO: namespace: e2e-tests-downward-api-qlpnn, resource: bindings, ignored listing per whitelist Feb 20 11:06:25.713: INFO: namespace e2e-tests-downward-api-qlpnn deletion completed in 6.405195641s • [SLOW TEST:15.425 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:06:25.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-098ee4c5-53d1-11ea-bcb7-0242ac110008 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-098ee4c5-53d1-11ea-bcb7-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:06:38.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-tpwvc" for this suite. 
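The ConfigMap spec above mounts configmap-test-upd as a volume, updates the ConfigMap object, and waits for the new value to appear inside the running pod (the kubelet rewrites projected ConfigMap files on its sync loop, so the change is eventual rather than instant). A hedged sketch of a pod that would observe such an update; the names, image, and the /etc/config path are placeholders, not taken from the test:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The ConfigMap is mounted as a volume; when the ConfigMap is updated, the
	// kubelet eventually rewrites the mounted files, so the container just
	// keeps re-reading them until the new value shows up.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-upd"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "cfg-reader",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "while true; do cat /etc/config/*; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/config"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}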
Feb 20 11:07:02.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:07:02.832: INFO: namespace: e2e-tests-configmap-tpwvc, resource: bindings, ignored listing per whitelist Feb 20 11:07:02.898: INFO: namespace e2e-tests-configmap-tpwvc deletion completed in 24.233973563s • [SLOW TEST:37.184 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:07:02.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-2phf9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2phf9 to expose endpoints map[] Feb 20 11:07:03.238: INFO: Get endpoints failed (13.521088ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 20 11:07:04.255: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2phf9 exposes endpoints map[] (1.030533842s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-2phf9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2phf9 to expose endpoints map[pod1:[80]] Feb 20 11:07:08.590: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.309918769s elapsed, will retry) Feb 20 11:07:13.199: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2phf9 exposes endpoints map[pod1:[80]] (8.919335116s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-2phf9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2phf9 to expose endpoints map[pod2:[80] pod1:[80]] Feb 20 11:07:17.688: INFO: Unexpected endpoints: found map[20426b5d-53d1-11ea-a994-fa163e34d433:[80]], expected map[pod2:[80] pod1:[80]] (4.462371691s elapsed, will retry) Feb 20 11:07:20.768: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2phf9 exposes endpoints map[pod1:[80] pod2:[80]] (7.542361833s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-2phf9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2phf9 to expose endpoints map[pod2:[80]] Feb 20 11:07:21.849: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2phf9 exposes endpoints map[pod2:[80]] (1.071358124s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-2phf9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2phf9 to expose endpoints 
map[] Feb 20 11:07:22.982: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2phf9 exposes endpoints map[] (1.108612405s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:07:23.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-2phf9" for this suite. Feb 20 11:07:47.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:07:47.373: INFO: namespace: e2e-tests-services-2phf9, resource: bindings, ignored listing per whitelist Feb 20 11:07:47.413: INFO: namespace e2e-tests-services-2phf9 deletion completed in 24.257943418s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:44.516 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:07:47.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-srz8d/secret-test-3a1a2030-53d1-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 11:07:47.793: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008" in namespace "e2e-tests-secrets-srz8d" to be "success or failure" Feb 20 11:07:47.815: INFO: Pod "pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.107243ms Feb 20 11:07:50.144: INFO: Pod "pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351235343s Feb 20 11:07:52.168: INFO: Pod "pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374975628s Feb 20 11:07:54.535: INFO: Pod "pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.741666414s Feb 20 11:07:56.577: INFO: Pod "pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.784325279s Feb 20 11:07:58.655: INFO: Pod "pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.861708166s STEP: Saw pod success Feb 20 11:07:58.655: INFO: Pod "pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:07:58.690: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008 container env-test: STEP: delete the pod Feb 20 11:07:58.845: INFO: Waiting for pod pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008 to disappear Feb 20 11:07:58.911: INFO: Pod pod-configmaps-3a2f57db-53d1-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:07:58.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-srz8d" for this suite. Feb 20 11:08:04.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:08:04.988: INFO: namespace: e2e-tests-secrets-srz8d, resource: bindings, ignored listing per whitelist Feb 20 11:08:05.121: INFO: namespace e2e-tests-secrets-srz8d deletion completed in 6.199382682s • [SLOW TEST:17.708 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:08:05.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 20 11:08:05.343: INFO: Waiting up to 5m0s for pod "pod-44a72e34-53d1-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-t4zdn" to be "success or failure" Feb 20 11:08:05.362: INFO: Pod "pod-44a72e34-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.543088ms Feb 20 11:08:07.397: INFO: Pod "pod-44a72e34-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05380658s Feb 20 11:08:09.432: INFO: Pod "pod-44a72e34-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089057106s Feb 20 11:08:11.487: INFO: Pod "pod-44a72e34-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144253034s Feb 20 11:08:13.506: INFO: Pod "pod-44a72e34-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163384455s Feb 20 11:08:15.753: INFO: Pod "pod-44a72e34-53d1-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.409846425s STEP: Saw pod success Feb 20 11:08:15.753: INFO: Pod "pod-44a72e34-53d1-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:08:15.789: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-44a72e34-53d1-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 11:08:15.928: INFO: Waiting for pod pod-44a72e34-53d1-11ea-bcb7-0242ac110008 to disappear Feb 20 11:08:15.937: INFO: Pod pod-44a72e34-53d1-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:08:15.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-t4zdn" for this suite. Feb 20 11:08:21.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:08:21.984: INFO: namespace: e2e-tests-emptydir-t4zdn, resource: bindings, ignored listing per whitelist Feb 20 11:08:22.174: INFO: namespace e2e-tests-emptydir-t4zdn deletion completed in 6.229805146s • [SLOW TEST:17.053 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:08:22.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 20 11:08:22.422: INFO: Waiting up to 5m0s for pod "pod-4ed26c05-53d1-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-xhfwf" to be "success or failure" Feb 20 11:08:22.500: INFO: Pod "pod-4ed26c05-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 78.487877ms Feb 20 11:08:24.520: INFO: Pod "pod-4ed26c05-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098491143s Feb 20 11:08:26.560: INFO: Pod "pod-4ed26c05-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137985255s Feb 20 11:08:28.645: INFO: Pod "pod-4ed26c05-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223240334s Feb 20 11:08:30.704: INFO: Pod "pod-4ed26c05-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28225764s Feb 20 11:08:33.402: INFO: Pod "pod-4ed26c05-53d1-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.980492536s STEP: Saw pod success Feb 20 11:08:33.402: INFO: Pod "pod-4ed26c05-53d1-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:08:33.438: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4ed26c05-53d1-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 11:08:33.753: INFO: Waiting for pod pod-4ed26c05-53d1-11ea-bcb7-0242ac110008 to disappear Feb 20 11:08:33.764: INFO: Pod pod-4ed26c05-53d1-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:08:33.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xhfwf" for this suite. Feb 20 11:08:39.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:08:39.900: INFO: namespace: e2e-tests-emptydir-xhfwf, resource: bindings, ignored listing per whitelist Feb 20 11:08:39.947: INFO: namespace e2e-tests-emptydir-xhfwf deletion completed in 6.170660487s • [SLOW TEST:17.773 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:08:39.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-5971b0b2-53d1-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 11:08:40.246: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-lncrj" to be "success or failure" Feb 20 11:08:40.260: INFO: Pod "pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.38074ms Feb 20 11:08:42.276: INFO: Pod "pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030132037s Feb 20 11:08:44.359: INFO: Pod "pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112812345s Feb 20 11:08:46.528: INFO: Pod "pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281870135s Feb 20 11:08:48.550: INFO: Pod "pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.304348975s Feb 20 11:08:50.588: INFO: Pod "pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.341807669s STEP: Saw pod success Feb 20 11:08:50.588: INFO: Pod "pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:08:50.616: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 20 11:08:52.128: INFO: Waiting for pod pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008 to disappear Feb 20 11:08:52.220: INFO: Pod pod-projected-configmaps-59734ffe-53d1-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:08:52.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lncrj" for this suite. Feb 20 11:08:58.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:08:58.879: INFO: namespace: e2e-tests-projected-lncrj, resource: bindings, ignored listing per whitelist Feb 20 11:08:58.879: INFO: namespace e2e-tests-projected-lncrj deletion completed in 6.583789074s • [SLOW TEST:18.931 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:08:58.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 11:08:59.205: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-9znml" to be "success or failure" Feb 20 11:08:59.224: INFO: Pod "downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.882062ms Feb 20 11:09:01.653: INFO: Pod "downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.448205667s Feb 20 11:09:03.676: INFO: Pod "downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.471277014s Feb 20 11:09:05.691: INFO: Pod "downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.48623712s Feb 20 11:09:07.703: INFO: Pod "downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.498291007s Feb 20 11:09:09.713: INFO: Pod "downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.507984354s STEP: Saw pod success Feb 20 11:09:09.713: INFO: Pod "downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:09:09.717: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 11:09:10.354: INFO: Waiting for pod downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008 to disappear Feb 20 11:09:10.555: INFO: Pod downwardapi-volume-64c21041-53d1-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:09:10.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9znml" for this suite. Feb 20 11:09:16.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:09:17.038: INFO: namespace: e2e-tests-downward-api-9znml, resource: bindings, ignored listing per whitelist Feb 20 11:09:17.084: INFO: namespace e2e-tests-downward-api-9znml deletion completed in 6.509461771s • [SLOW TEST:18.192 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:09:17.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 11:09:17.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-99klm" to be "success or failure" Feb 20 11:09:17.298: INFO: Pod "downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.039817ms Feb 20 11:09:19.313: INFO: Pod "downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026929402s Feb 20 11:09:21.326: INFO: Pod "downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039706922s Feb 20 11:09:23.339: INFO: Pod "downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052707182s Feb 20 11:09:25.377: INFO: Pod "downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090790641s Feb 20 11:09:27.809: INFO: Pod "downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.522902076s STEP: Saw pod success Feb 20 11:09:27.809: INFO: Pod "downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:09:27.819: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 11:09:28.265: INFO: Waiting for pod downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008 to disappear Feb 20 11:09:28.271: INFO: Pod downwardapi-volume-6f88e0a1-53d1-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:09:28.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-99klm" for this suite. Feb 20 11:09:34.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:09:34.390: INFO: namespace: e2e-tests-projected-99klm, resource: bindings, ignored listing per whitelist Feb 20 11:09:34.428: INFO: namespace e2e-tests-projected-99klm deletion completed in 6.148530127s • [SLOW TEST:17.344 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:09:34.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-k2m6j STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-k2m6j STEP: Deleting pre-stop pod Feb 20 11:09:57.956: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:09:57.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-k2m6j" for this suite. Feb 20 11:10:32.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:10:32.201: INFO: namespace: e2e-tests-prestop-k2m6j, resource: bindings, ignored listing per whitelist Feb 20 11:10:32.230: INFO: namespace e2e-tests-prestop-k2m6j deletion completed in 34.220103341s • [SLOW TEST:57.801 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:10:32.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-zzcq STEP: Creating a pod to test atomic-volume-subpath Feb 20 11:10:32.649: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-zzcq" in namespace "e2e-tests-subpath-2vfvm" to be "success or failure" Feb 20 11:10:32.684: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.983785ms Feb 20 11:10:35.431: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.782098839s Feb 20 11:10:37.461: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.811855778s Feb 20 11:10:39.478: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.828584721s Feb 20 11:10:41.492: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.842660965s Feb 20 11:10:43.504: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.855301282s Feb 20 11:10:45.519: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.870310816s Feb 20 11:10:47.584: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.934776243s Feb 20 11:10:49.593: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.943881742s Feb 20 11:10:51.604: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Running", Reason="", readiness=false. Elapsed: 18.954897188s Feb 20 11:10:53.643: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Running", Reason="", readiness=false. Elapsed: 20.994321955s Feb 20 11:10:55.669: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Running", Reason="", readiness=false. Elapsed: 23.020184364s Feb 20 11:10:57.687: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Running", Reason="", readiness=false. Elapsed: 25.037837785s Feb 20 11:10:59.714: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Running", Reason="", readiness=false. Elapsed: 27.065225865s Feb 20 11:11:01.731: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Running", Reason="", readiness=false. Elapsed: 29.082141786s Feb 20 11:11:03.762: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Running", Reason="", readiness=false. Elapsed: 31.112582832s Feb 20 11:11:05.899: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Running", Reason="", readiness=false. Elapsed: 33.249860316s Feb 20 11:11:08.074: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Running", Reason="", readiness=false. Elapsed: 35.424749827s Feb 20 11:11:10.415: INFO: Pod "pod-subpath-test-projected-zzcq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.76581463s STEP: Saw pod success Feb 20 11:11:10.415: INFO: Pod "pod-subpath-test-projected-zzcq" satisfied condition "success or failure" Feb 20 11:11:10.427: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-zzcq container test-container-subpath-projected-zzcq: STEP: delete the pod Feb 20 11:11:10.770: INFO: Waiting for pod pod-subpath-test-projected-zzcq to disappear Feb 20 11:11:10.787: INFO: Pod pod-subpath-test-projected-zzcq no longer exists STEP: Deleting pod pod-subpath-test-projected-zzcq Feb 20 11:11:10.787: INFO: Deleting pod "pod-subpath-test-projected-zzcq" in namespace "e2e-tests-subpath-2vfvm" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:11:10.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-2vfvm" for this suite. 
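The subpath spec above mounts a projected volume and then re-mounts a single file out of it into the container via subPath, waiting for the pod to run to completion. A minimal sketch of that shape, assuming a hypothetical ConfigMap named subpath-data as the projected source (all names, paths, and the image are placeholders):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A projected volume aggregates one or more sources; the container then
	// mounts just one file out of it via subPath, which is the combination
	// the atomic-writer subpath spec exercises.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-projected-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-data"}, // hypothetical ConfigMap
								Items:                []corev1.KeyToPath{{Key: "data", Path: "data"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "subpath-reader",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /mnt/file && sleep 60"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/mnt/file",
					SubPath:   "data", // mount a single file from the projected volume
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}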
Feb 20 11:11:18.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:11:19.169: INFO: namespace: e2e-tests-subpath-2vfvm, resource: bindings, ignored listing per whitelist Feb 20 11:11:19.181: INFO: namespace e2e-tests-subpath-2vfvm deletion completed in 8.374001968s • [SLOW TEST:46.951 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:11:19.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 11:11:19.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-kwdxx" to be "success or failure" Feb 20 11:11:19.401: INFO: Pod "downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.363685ms Feb 20 11:11:21.418: INFO: Pod "downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031768143s Feb 20 11:11:23.433: INFO: Pod "downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047124849s Feb 20 11:11:25.482: INFO: Pod "downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095785132s Feb 20 11:11:27.908: INFO: Pod "downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52140093s Feb 20 11:11:30.125: INFO: Pod "downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.738940479s STEP: Saw pod success Feb 20 11:11:30.125: INFO: Pod "downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:11:30.134: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 11:11:30.403: INFO: Waiting for pod downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008 to disappear Feb 20 11:11:30.417: INFO: Pod downwardapi-volume-b84fd675-53d1-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:11:30.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kwdxx" for this suite. Feb 20 11:11:36.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:11:36.714: INFO: namespace: e2e-tests-projected-kwdxx, resource: bindings, ignored listing per whitelist Feb 20 11:11:36.739: INFO: namespace e2e-tests-projected-kwdxx deletion completed in 6.29620681s • [SLOW TEST:17.558 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:11:36.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 20 11:11:55.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:11:55.084: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:11:57.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:11:57.093: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:11:59.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:11:59.097: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:12:01.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:12:01.093: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:12:03.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:12:03.097: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:12:05.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:12:05.101: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:12:07.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:12:07.097: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:12:09.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:12:09.104: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:12:11.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:12:11.151: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:12:13.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:12:13.123: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:12:15.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:12:15.102: INFO: Pod pod-with-prestop-exec-hook still exists Feb 20 11:12:17.084: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 20 11:12:17.102: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:12:17.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-m27rm" for this suite. 
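Both lifecycle-hook specs end the same way: delete the pod, then poll until it is gone, which is where all the "still exists" lines above come from. A sketch of that kind of wait loop using client-go; it assumes a client-go recent enough that Get takes a context (the 1.13-era client behind this log did not), and the namespace and pod name are placeholders. The in-tree framework wraps this sort of polling in helper functions; a plain loop keeps the sketch dependency-light.

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Equivalent of the "Waiting for pod ... to disappear" loop above:
	// poll Get() every 2s until the API server reports NotFound.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "pod-with-prestop-exec-hook" // hypothetical target
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		_, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Printf("Pod %s no longer exists\n", name)
			return
		}
		fmt.Printf("Pod %s still exists\n", name)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to disappear")
}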
Feb 20 11:12:41.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:12:41.302: INFO: namespace: e2e-tests-container-lifecycle-hook-m27rm, resource: bindings, ignored listing per whitelist Feb 20 11:12:41.374: INFO: namespace e2e-tests-container-lifecycle-hook-m27rm deletion completed in 24.240559479s • [SLOW TEST:64.635 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:12:41.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Feb 20 11:12:51.747: INFO: Pod pod-hostip-e95a7aa7-53d1-11ea-bcb7-0242ac110008 has hostIP: 10.96.1.240 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:12:51.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vzjmc" for this suite. 
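The host IP check above boils down to reading status.hostIP from the created pod, which the log reports as 10.96.1.240. A hedged client-go sketch of that read (the clientset is assumed to be built elsewhere from the same kubeconfig; namespace and pod name are parameters, not values hard-coded by the test):

// Illustrative only: fetch a pod and report the node IP the kubelet filled into status.hostIP.
package sketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printHostIP(cs kubernetes.Interface, namespace, name string) error {
	// v1.13-era client-go signature (no context argument); newer releases take a context.Context first.
	pod, err := cs.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("Pod %s has hostIP: %s\n", pod.GetName(), pod.Status.HostIP)
	return nil
}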
Feb 20 11:13:15.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:13:15.955: INFO: namespace: e2e-tests-pods-vzjmc, resource: bindings, ignored listing per whitelist Feb 20 11:13:15.980: INFO: namespace e2e-tests-pods-vzjmc deletion completed in 24.22582583s • [SLOW TEST:34.606 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:13:15.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 11:13:16.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 20 11:13:16.219: INFO: stderr: "" Feb 20 11:13:16.219: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:13:16.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q57fk" for this suite. 
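The kubectl version test above shells out to the kubectl binary and checks that both client and server build information are printed. A possible client-go analogue of the server half (not what the test itself runs) is to ask the discovery client for the API server's version.Info:

// Illustrative only: query the API server's build information via the discovery client,
// roughly the "Server Version" portion of the kubectl output quoted in the log above.
package sketch

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func printServerVersion(cs kubernetes.Interface) error {
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("Server Version: %s (git %s, built %s)\n", info.GitVersion, info.GitCommit, info.BuildDate)
	return nil
}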
Feb 20 11:13:22.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:13:22.466: INFO: namespace: e2e-tests-kubectl-q57fk, resource: bindings, ignored listing per whitelist Feb 20 11:13:22.475: INFO: namespace e2e-tests-kubectl-q57fk deletion completed in 6.238013873s • [SLOW TEST:6.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:13:22.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 20 11:13:22.712: INFO: Waiting up to 5m0s for pod "pod-01d214b6-53d2-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-9p75d" to be "success or failure" Feb 20 11:13:22.723: INFO: Pod "pod-01d214b6-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.816843ms Feb 20 11:13:24.748: INFO: Pod "pod-01d214b6-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035321927s Feb 20 11:13:26.759: INFO: Pod "pod-01d214b6-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046161162s Feb 20 11:13:29.035: INFO: Pod "pod-01d214b6-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.322839714s Feb 20 11:13:31.074: INFO: Pod "pod-01d214b6-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.361192646s Feb 20 11:13:33.091: INFO: Pod "pod-01d214b6-53d2-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.378066447s STEP: Saw pod success Feb 20 11:13:33.091: INFO: Pod "pod-01d214b6-53d2-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:13:33.094: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-01d214b6-53d2-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 11:13:33.221: INFO: Waiting for pod pod-01d214b6-53d2-11ea-bcb7-0242ac110008 to disappear Feb 20 11:13:33.259: INFO: Pod pod-01d214b6-53d2-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:13:33.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9p75d" for this suite. 
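The EmptyDir test above ("emptydir 0777 on tmpfs") mounts a memory-backed emptyDir and has a non-root container write into it with 0777 permissions, then expects the pod to reach Succeeded. A sketch of that volume and security-context shape (pod name, image, command, UID and mount path are placeholders; only the container name test-container is taken from the log):

// Illustrative only: a pod with a memory-backed (tmpfs) emptyDir mounted by a non-root container.
// Pod name, image, command, UID and mount path are placeholders, not taken from the log.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var nonRootUID int64 = 1001

var tmpfsEmptyDirPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
	Spec: corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium: Memory backs the emptyDir with tmpfs instead of node disk.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:            "test-container",
			Image:           "busybox",
			Command:         []string{"sh", "-c", "touch /test-volume/file && chmod 0777 /test-volume/file"},
			SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
			VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
	},
}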
Feb 20 11:13:39.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:13:39.394: INFO: namespace: e2e-tests-emptydir-9p75d, resource: bindings, ignored listing per whitelist Feb 20 11:13:39.492: INFO: namespace e2e-tests-emptydir-9p75d deletion completed in 6.227409504s • [SLOW TEST:17.017 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:13:39.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jksxn Feb 20 11:13:49.816: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jksxn STEP: checking the pod's current state and verifying that restartCount is present Feb 20 11:13:49.828: INFO: Initial restart count of pod liveness-http is 0 Feb 20 11:14:10.416: INFO: Restart count of pod e2e-tests-container-probe-jksxn/liveness-http is now 1 (20.587715844s elapsed) Feb 20 11:14:30.712: INFO: Restart count of pod e2e-tests-container-probe-jksxn/liveness-http is now 2 (40.884376349s elapsed) Feb 20 11:14:51.227: INFO: Restart count of pod e2e-tests-container-probe-jksxn/liveness-http is now 3 (1m1.399412612s elapsed) Feb 20 11:15:09.428: INFO: Restart count of pod e2e-tests-container-probe-jksxn/liveness-http is now 4 (1m19.600389724s elapsed) Feb 20 11:16:12.348: INFO: Restart count of pod e2e-tests-container-probe-jksxn/liveness-http is now 5 (2m22.519818662s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:16:12.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jksxn" for this suite. 
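The probe test above creates the pod liveness-http and then watches its restartCount climb monotonically (1 through 5 in the log) as the HTTP liveness probe keeps failing. A sketch of a container carrying such a probe (image, path, port and timing values are placeholders; the embedded corev1.Handler matches the v1.13-era Probe type):

// Illustrative only: a container whose HTTP liveness probe will restart it whenever the endpoint fails.
// Image, path, port and timing values are placeholders, not taken from the log.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var livenessHTTPPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "liveness",
			Image: "nginx", // placeholder
			LivenessProbe: &corev1.Probe{
				// In the v1.13 API the probe embeds corev1.Handler; newer versions use ProbeHandler.
				Handler: corev1.Handler{
					HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
				},
				InitialDelaySeconds: 5,
				PeriodSeconds:       3,
				FailureThreshold:    1,
			},
		}},
	},
}

With the default RestartPolicy of Always, each failed probe leads the kubelet to restart the container, which is exactly the increasing restart count the test asserts on.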
Feb 20 11:16:18.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:16:18.745: INFO: namespace: e2e-tests-container-probe-jksxn, resource: bindings, ignored listing per whitelist Feb 20 11:16:18.768: INFO: namespace e2e-tests-container-probe-jksxn deletion completed in 6.383350048s • [SLOW TEST:159.276 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:16:18.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-6ae99bc5-53d2-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 11:16:19.115: INFO: Waiting up to 5m0s for pod "pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008" in namespace "e2e-tests-secrets-wmswt" to be "success or failure" Feb 20 11:16:19.146: INFO: Pod "pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 30.820065ms Feb 20 11:16:21.160: INFO: Pod "pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044953333s Feb 20 11:16:23.174: INFO: Pod "pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05847495s Feb 20 11:16:25.738: INFO: Pod "pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.622294225s Feb 20 11:16:27.753: INFO: Pod "pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.637235625s Feb 20 11:16:29.794: INFO: Pod "pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.678700057s STEP: Saw pod success Feb 20 11:16:29.794: INFO: Pod "pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:16:29.799: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 20 11:16:30.479: INFO: Waiting for pod pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008 to disappear Feb 20 11:16:30.515: INFO: Pod pod-secrets-6af4ba0e-53d2-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:16:30.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-wmswt" for this suite. 
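The Secrets test above mounts one Secret into the same pod through more than one volume and expects the consuming container to succeed. A sketch of that shape (Secret name, image and mount paths are placeholders; only the container name secret-volume-test mirrors the log):

// Illustrative only: one Secret consumed through two separate volumes in the same pod.
// Secret name, image and mount paths are placeholders, not taken from the log.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var multiVolumeSecretPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-multi"},
	Spec: corev1.PodSpec{
		Volumes: []corev1.Volume{
			{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
			}},
			{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
			}},
		},
		Containers: []corev1.Container{{
			Name:  "secret-volume-test",
			Image: "busybox",
			VolumeMounts: []corev1.VolumeMount{
				{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
				{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
			},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
	},
}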
Feb 20 11:16:36.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:16:36.709: INFO: namespace: e2e-tests-secrets-wmswt, resource: bindings, ignored listing per whitelist Feb 20 11:16:36.755: INFO: namespace e2e-tests-secrets-wmswt deletion completed in 6.224352822s • [SLOW TEST:17.987 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:16:36.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 11:16:36.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-62cq5' Feb 20 11:16:39.320: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 20 11:16:39.320: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Feb 20 11:16:41.459: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-xh46t] Feb 20 11:16:41.459: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-xh46t" in namespace "e2e-tests-kubectl-62cq5" to be "running and ready" Feb 20 11:16:41.468: INFO: Pod "e2e-test-nginx-rc-xh46t": Phase="Pending", Reason="", readiness=false. Elapsed: 9.419677ms Feb 20 11:16:43.490: INFO: Pod "e2e-test-nginx-rc-xh46t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031059962s Feb 20 11:16:45.588: INFO: Pod "e2e-test-nginx-rc-xh46t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129617907s Feb 20 11:16:47.599: INFO: Pod "e2e-test-nginx-rc-xh46t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139706162s Feb 20 11:16:49.615: INFO: Pod "e2e-test-nginx-rc-xh46t": Phase="Running", Reason="", readiness=true. Elapsed: 8.15623512s Feb 20 11:16:49.615: INFO: Pod "e2e-test-nginx-rc-xh46t" satisfied condition "running and ready" Feb 20 11:16:49.615: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-xh46t] Feb 20 11:16:49.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-62cq5' Feb 20 11:16:49.839: INFO: stderr: "" Feb 20 11:16:49.839: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Feb 20 11:16:49.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-62cq5' Feb 20 11:16:50.019: INFO: stderr: "" Feb 20 11:16:50.019: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:16:50.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-62cq5" for this suite. Feb 20 11:17:12.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:17:12.366: INFO: namespace: e2e-tests-kubectl-62cq5, resource: bindings, ignored listing per whitelist Feb 20 11:17:12.384: INFO: namespace e2e-tests-kubectl-62cq5 deletion completed in 22.356237163s • [SLOW TEST:35.629 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:17:12.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 11:17:12.647: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-f2khz" to be "success or failure" Feb 20 11:17:12.657: INFO: Pod "downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.335671ms Feb 20 11:17:14.689: INFO: Pod "downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041325537s Feb 20 11:17:16.731: INFO: Pod "downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.083648831s Feb 20 11:17:18.772: INFO: Pod "downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124914741s Feb 20 11:17:20.800: INFO: Pod "downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152030763s Feb 20 11:17:22.819: INFO: Pod "downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 10.171638577s Feb 20 11:17:24.832: INFO: Pod "downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.184430809s STEP: Saw pod success Feb 20 11:17:24.832: INFO: Pod "downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:17:24.837: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 11:17:24.997: INFO: Waiting for pod downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008 to disappear Feb 20 11:17:25.021: INFO: Pod downwardapi-volume-8adaa5cb-53d2-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:17:25.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-f2khz" for this suite. Feb 20 11:17:31.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:17:31.287: INFO: namespace: e2e-tests-downward-api-f2khz, resource: bindings, ignored listing per whitelist Feb 20 11:17:31.346: INFO: namespace e2e-tests-downward-api-f2khz deletion completed in 6.312826113s • [SLOW TEST:18.961 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:17:31.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 20 11:17:42.232: INFO: Successfully updated pod "annotationupdate961a72b4-53d2-11ea-bcb7-0242ac110008" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:17:44.392: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-629gl" for this suite. Feb 20 11:18:08.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:18:08.582: INFO: namespace: e2e-tests-projected-629gl, resource: bindings, ignored listing per whitelist Feb 20 11:18:08.771: INFO: namespace e2e-tests-projected-629gl deletion completed in 24.371806949s • [SLOW TEST:37.425 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:18:08.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 20 11:18:09.090: INFO: Waiting up to 5m0s for pod "pod-ac7c603d-53d2-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-b7fr2" to be "success or failure" Feb 20 11:18:09.116: INFO: Pod "pod-ac7c603d-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 26.282834ms Feb 20 11:18:11.303: INFO: Pod "pod-ac7c603d-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213523452s Feb 20 11:18:13.322: INFO: Pod "pod-ac7c603d-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232561546s Feb 20 11:18:15.361: INFO: Pod "pod-ac7c603d-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271347997s Feb 20 11:18:17.388: INFO: Pod "pod-ac7c603d-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.298553495s Feb 20 11:18:19.409: INFO: Pod "pod-ac7c603d-53d2-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.318870512s STEP: Saw pod success Feb 20 11:18:19.409: INFO: Pod "pod-ac7c603d-53d2-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:18:19.419: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ac7c603d-53d2-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 11:18:20.042: INFO: Waiting for pod pod-ac7c603d-53d2-11ea-bcb7-0242ac110008 to disappear Feb 20 11:18:20.105: INFO: Pod pod-ac7c603d-53d2-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:18:20.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-b7fr2" for this suite. 
Feb 20 11:18:26.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:18:26.284: INFO: namespace: e2e-tests-emptydir-b7fr2, resource: bindings, ignored listing per whitelist Feb 20 11:18:26.370: INFO: namespace e2e-tests-emptydir-b7fr2 deletion completed in 6.246380695s • [SLOW TEST:17.599 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:18:26.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 11:18:26.574: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 20 11:18:26.737: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 20 11:18:31.944: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 20 11:18:35.958: INFO: Creating deployment "test-rolling-update-deployment" Feb 20 11:18:35.977: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 20 11:18:35.995: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 20 11:18:38.025: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 20 11:18:38.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 11:18:40.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 11:18:42.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717794316, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 11:18:44.040: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 20 11:18:44.053: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-2wjwf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2wjwf/deployments/test-rolling-update-deployment,UID:bc8a0fff-53d2-11ea-a994-fa163e34d433,ResourceVersion:22302435,Generation:1,CreationTimestamp:2020-02-20 11:18:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-20 11:18:36 +0000 UTC 2020-02-20 11:18:36 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-20 11:18:43 +0000 UTC 2020-02-20 11:18:36 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 20 11:18:44.058: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-2wjwf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2wjwf/replicasets/test-rolling-update-deployment-75db98fb4c,UID:bc91d436-53d2-11ea-a994-fa163e34d433,ResourceVersion:22302426,Generation:1,CreationTimestamp:2020-02-20 11:18:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bc8a0fff-53d2-11ea-a994-fa163e34d433 0xc00150af37 0xc00150af38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 20 11:18:44.058: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 20 11:18:44.058: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-2wjwf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2wjwf/replicasets/test-rolling-update-controller,UID:b6f25b8c-53d2-11ea-a994-fa163e34d433,ResourceVersion:22302434,Generation:2,CreationTimestamp:2020-02-20 11:18:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bc8a0fff-53d2-11ea-a994-fa163e34d433 0xc00150ae77 0xc00150ae78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 11:18:44.062: INFO: Pod "test-rolling-update-deployment-75db98fb4c-nw59c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-nw59c,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-2wjwf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2wjwf/pods/test-rolling-update-deployment-75db98fb4c-nw59c,UID:bc92c43a-53d2-11ea-a994-fa163e34d433,ResourceVersion:22302424,Generation:0,CreationTimestamp:2020-02-20 11:18:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c bc91d436-53d2-11ea-a994-fa163e34d433 0xc00150b817 0xc00150b818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qr58w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qr58w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-qr58w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00150b880} {node.kubernetes.io/unreachable Exists NoExecute 0xc00150b8a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:18:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:18:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:18:42 +0000 UTC } {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:18:36 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-20 11:18:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-20 11:18:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://97f0abbb4fdb622910e4396a991f8577fcce0ee2810fa085d2cc25125c08b855}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:18:44.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-2wjwf" for this suite. Feb 20 11:18:52.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:18:52.466: INFO: namespace: e2e-tests-deployment-2wjwf, resource: bindings, ignored listing per whitelist Feb 20 11:18:52.593: INFO: namespace e2e-tests-deployment-2wjwf deletion completed in 8.525545115s • [SLOW TEST:26.223 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:18:52.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-c72567c3-53d2-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 11:18:53.864: INFO: Waiting up to 5m0s for pod "pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008" in namespace "e2e-tests-configmap-swfxg" to be "success or failure" Feb 20 11:18:53.904: INFO: Pod "pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 40.036555ms Feb 20 11:18:56.143: INFO: Pod "pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27884867s Feb 20 11:18:58.235: INFO: Pod "pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370745793s Feb 20 11:19:00.361: INFO: Pod "pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.497102212s Feb 20 11:19:02.375: INFO: Pod "pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.510401197s Feb 20 11:19:04.387: INFO: Pod "pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.522567215s STEP: Saw pod success Feb 20 11:19:04.387: INFO: Pod "pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:19:04.392: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 20 11:19:04.705: INFO: Waiting for pod pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008 to disappear Feb 20 11:19:04.733: INFO: Pod pod-configmaps-c73044bd-53d2-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:19:04.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-swfxg" for this suite. Feb 20 11:19:10.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:19:11.004: INFO: namespace: e2e-tests-configmap-swfxg, resource: bindings, ignored listing per whitelist Feb 20 11:19:11.022: INFO: namespace e2e-tests-configmap-swfxg deletion completed in 6.27595776s • [SLOW TEST:18.428 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:19:11.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:19:23.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-zzb48" for this suite. 
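The Kubelet test above schedules a busybox command that always fails and then asserts the container reports a terminated state with a reason. A sketch of such a spec plus a helper for reading that state back (pod and container names and the image are placeholders; in practice the status has to be polled until the container actually terminates):

// Illustrative only: a container whose command always exits non-zero, plus a helper that
// pulls the terminated reason out of its status. Names and image are placeholders.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var alwaysFailPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "bin-false",
			Image:   "busybox",
			Command: []string{"/bin/false"}, // exits non-zero every time
		}},
	},
}

// terminatedReason returns the Reason of the first container state that has terminated,
// or "" if no container has terminated yet.
func terminatedReason(pod *corev1.Pod) string {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Terminated != nil {
			return cs.State.Terminated.Reason
		}
	}
	return ""
}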
Feb 20 11:19:29.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:19:29.570: INFO: namespace: e2e-tests-kubelet-test-zzb48, resource: bindings, ignored listing per whitelist Feb 20 11:19:29.588: INFO: namespace e2e-tests-kubelet-test-zzb48 deletion completed in 6.238265964s • [SLOW TEST:18.566 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:19:29.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 20 11:19:29.688: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 20 11:19:29.696: INFO: Waiting for terminating namespaces to be deleted... 
Feb 20 11:19:29.699: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 20 11:19:29.709: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 20 11:19:29.709: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 20 11:19:29.709: INFO: Container weave ready: true, restart count 0
Feb 20 11:19:29.709: INFO: Container weave-npc ready: true, restart count 0
Feb 20 11:19:29.709: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 20 11:19:29.709: INFO: Container coredns ready: true, restart count 0
Feb 20 11:19:29.709: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 20 11:19:29.709: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 20 11:19:29.709: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 20 11:19:29.709: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 20 11:19:29.709: INFO: Container coredns ready: true, restart count 0
Feb 20 11:19:29.710: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 20 11:19:29.710: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f5184e920702ac], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 11:19:30.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-snnct" for this suite.
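The scheduling test above creates a pod named restricted-pod with a nodeSelector no node satisfies, then watches for the FailedScheduling event quoted in the log. A sketch of such a pod (the label key/value, image and container name are placeholders; only the pod name comes from the event in the log):

// Illustrative only: a pod whose nodeSelector matches no node, so it stays Pending with a
// FailedScheduling event like the one quoted above. Label, image and container name are placeholders.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var restrictedPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
	Spec: corev1.PodSpec{
		NodeSelector: map[string]string{
			"example.com/nonexistent-label": "true", // no node in the cluster carries this label
		},
		Containers: []corev1.Container{{
			Name:  "restricted",
			Image: "busybox",
		}},
	},
}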
Feb 20 11:19:37.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:19:37.170: INFO: namespace: e2e-tests-sched-pred-snnct, resource: bindings, ignored listing per whitelist Feb 20 11:19:37.227: INFO: namespace e2e-tests-sched-pred-snnct deletion completed in 6.357577126s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.639 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:19:37.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-e12fe1af-53d2-11ea-bcb7-0242ac110008 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-e12fe1af-53d2-11ea-bcb7-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:19:49.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x2mkv" for this suite. 
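The projected ConfigMap test above mounts a ConfigMap through a projected volume, updates the ConfigMap, and waits for the new content to appear in the mounted file. A sketch of the projected volume source involved (ConfigMap name, key, paths, image and command are placeholders loosely modeled on the names in the log, not the test's actual values):

// Illustrative only: a projected volume that surfaces one ConfigMap key as a file; an update
// to the ConfigMap is eventually reflected in the mounted content. Names and paths are placeholders.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var projectedConfigMapPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
	Spec: corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "projected-configmap-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd"},
							Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
						},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "projected-configmap-volume-test",
			Image:        "busybox",
			Command:      []string{"sh", "-c", "while true; do cat /etc/projected-configmap-volume/path/to/data-1; sleep 5; done"},
			VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
		}},
	},
}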
Feb 20 11:20:13.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:20:14.158: INFO: namespace: e2e-tests-projected-x2mkv, resource: bindings, ignored listing per whitelist Feb 20 11:20:14.209: INFO: namespace e2e-tests-projected-x2mkv deletion completed in 24.450427223s • [SLOW TEST:36.982 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:20:14.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-f73b6d6e-53d2-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 11:20:14.468: INFO: Waiting up to 5m0s for pod "pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008" in namespace "e2e-tests-configmap-vdzmd" to be "success or failure" Feb 20 11:20:14.488: INFO: Pod "pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.055571ms Feb 20 11:20:16.510: INFO: Pod "pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042011131s Feb 20 11:20:18.532: INFO: Pod "pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064401431s Feb 20 11:20:20.560: INFO: Pod "pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09245719s Feb 20 11:20:22.584: INFO: Pod "pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116331336s Feb 20 11:20:24.639: INFO: Pod "pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.170829891s STEP: Saw pod success Feb 20 11:20:24.639: INFO: Pod "pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:20:24.644: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 20 11:20:24.928: INFO: Waiting for pod pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008 to disappear Feb 20 11:20:24.938: INFO: Pod pod-configmaps-f73da569-53d2-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:20:24.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vdzmd" for this suite. Feb 20 11:20:30.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:20:31.112: INFO: namespace: e2e-tests-configmap-vdzmd, resource: bindings, ignored listing per whitelist Feb 20 11:20:31.117: INFO: namespace e2e-tests-configmap-vdzmd deletion completed in 6.171751478s • [SLOW TEST:16.908 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:20:31.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 11:20:31.333: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-l6kxr" to be "success or failure" Feb 20 11:20:31.405: INFO: Pod "downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 72.290449ms Feb 20 11:20:33.623: INFO: Pod "downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290422894s Feb 20 11:20:35.639: INFO: Pod "downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30618351s Feb 20 11:20:37.678: INFO: Pod "downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.34524991s Feb 20 11:20:40.282: INFO: Pod "downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.949681329s Feb 20 11:20:42.302: INFO: Pod "downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.968902862s STEP: Saw pod success Feb 20 11:20:42.302: INFO: Pod "downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:20:42.310: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 11:20:42.695: INFO: Waiting for pod downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008 to disappear Feb 20 11:20:42.733: INFO: Pod downwardapi-volume-0149a02e-53d3-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:20:42.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-l6kxr" for this suite. Feb 20 11:20:48.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:20:49.073: INFO: namespace: e2e-tests-downward-api-l6kxr, resource: bindings, ignored listing per whitelist Feb 20 11:20:49.178: INFO: namespace e2e-tests-downward-api-l6kxr deletion completed in 6.426838844s • [SLOW TEST:18.061 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:20:49.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0c16d3ae-53d3-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 11:20:49.855: INFO: Waiting up to 5m0s for pod "pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008" in namespace "e2e-tests-secrets-9gxtb" to be "success or failure" Feb 20 11:20:49.869: INFO: Pod "pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.544475ms Feb 20 11:20:51.963: INFO: Pod "pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108425869s Feb 20 11:20:54.103: INFO: Pod "pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247924097s Feb 20 11:20:56.137: INFO: Pod "pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.281789246s Feb 20 11:20:58.161: INFO: Pod "pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.306351126s Feb 20 11:21:00.212: INFO: Pod "pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.357354167s STEP: Saw pod success Feb 20 11:21:00.212: INFO: Pod "pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:21:00.219: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 20 11:21:00.374: INFO: Waiting for pod pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008 to disappear Feb 20 11:21:00.382: INFO: Pod pod-secrets-0c52239f-53d3-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:21:00.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-9gxtb" for this suite. Feb 20 11:21:06.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:21:06.529: INFO: namespace: e2e-tests-secrets-9gxtb, resource: bindings, ignored listing per whitelist Feb 20 11:21:06.675: INFO: namespace e2e-tests-secrets-9gxtb deletion completed in 6.287404098s STEP: Destroying namespace "e2e-tests-secret-namespace-75bgr" for this suite. Feb 20 11:21:12.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:21:12.966: INFO: namespace: e2e-tests-secret-namespace-75bgr, resource: bindings, ignored listing per whitelist Feb 20 11:21:13.057: INFO: namespace e2e-tests-secret-namespace-75bgr deletion completed in 6.382373505s • [SLOW TEST:23.879 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:21:13.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 11:21:13.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-x4c9f" to be "success or failure" Feb 20 11:21:13.327: INFO: Pod 
"downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 45.718719ms Feb 20 11:21:15.799: INFO: Pod "downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.518176374s Feb 20 11:21:17.841: INFO: Pod "downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.559551781s Feb 20 11:21:19.873: INFO: Pod "downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.592125573s Feb 20 11:21:21.917: INFO: Pod "downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.636387896s Feb 20 11:21:23.937: INFO: Pod "downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.655504539s STEP: Saw pod success Feb 20 11:21:23.937: INFO: Pod "downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:21:23.948: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 11:21:24.105: INFO: Waiting for pod downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008 to disappear Feb 20 11:21:24.210: INFO: Pod downwardapi-volume-1a458bcf-53d3-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:21:24.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-x4c9f" for this suite. Feb 20 11:21:30.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:21:30.700: INFO: namespace: e2e-tests-downward-api-x4c9f, resource: bindings, ignored listing per whitelist Feb 20 11:21:30.707: INFO: namespace e2e-tests-downward-api-x4c9f deletion completed in 6.475140289s • [SLOW TEST:17.649 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:21:30.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Feb 20 11:21:31.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nhq4g' Feb 20 11:21:31.553: INFO: stderr: 
"" Feb 20 11:21:31.553: INFO: stdout: "pod/pause created\n" Feb 20 11:21:31.553: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 20 11:21:31.553: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-nhq4g" to be "running and ready" Feb 20 11:21:31.646: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 92.815781ms Feb 20 11:21:33.722: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169610523s Feb 20 11:21:35.737: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183777877s Feb 20 11:21:37.761: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207898039s Feb 20 11:21:39.770: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.217305996s Feb 20 11:21:39.770: INFO: Pod "pause" satisfied condition "running and ready" Feb 20 11:21:39.770: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Feb 20 11:21:39.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-nhq4g' Feb 20 11:21:40.050: INFO: stderr: "" Feb 20 11:21:40.050: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 20 11:21:40.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-nhq4g' Feb 20 11:21:40.188: INFO: stderr: "" Feb 20 11:21:40.188: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 20 11:21:40.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-nhq4g' Feb 20 11:21:40.361: INFO: stderr: "" Feb 20 11:21:40.361: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 20 11:21:40.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-nhq4g' Feb 20 11:21:40.473: INFO: stderr: "" Feb 20 11:21:40.473: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Feb 20 11:21:40.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nhq4g' Feb 20 11:21:40.655: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 20 11:21:40.655: INFO: stdout: "pod \"pause\" force deleted\n" Feb 20 11:21:40.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-nhq4g' Feb 20 11:21:40.775: INFO: stderr: "No resources found.\n" Feb 20 11:21:40.775: INFO: stdout: "" Feb 20 11:21:40.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-nhq4g -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 20 11:21:40.884: INFO: stderr: "" Feb 20 11:21:40.884: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:21:40.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nhq4g" for this suite. Feb 20 11:21:46.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:21:47.184: INFO: namespace: e2e-tests-kubectl-nhq4g, resource: bindings, ignored listing per whitelist Feb 20 11:21:47.201: INFO: namespace e2e-tests-kubectl-nhq4g deletion completed in 6.260935339s • [SLOW TEST:16.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:21:47.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-2eb30ab9-53d3-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 11:21:47.583: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-8zbdm" to be "success or failure" Feb 20 11:21:47.595: INFO: Pod "pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.537531ms Feb 20 11:21:49.641: INFO: Pod "pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057794534s Feb 20 11:21:51.660: INFO: Pod "pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.077226641s Feb 20 11:21:53.671: INFO: Pod "pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08838639s Feb 20 11:21:56.020: INFO: Pod "pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436888717s Feb 20 11:21:58.045: INFO: Pod "pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.462142908s STEP: Saw pod success Feb 20 11:21:58.045: INFO: Pod "pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:21:58.053: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 20 11:21:58.198: INFO: Waiting for pod pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008 to disappear Feb 20 11:21:58.212: INFO: Pod pod-projected-configmaps-2ebd57bc-53d3-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:21:58.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8zbdm" for this suite. Feb 20 11:22:04.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:22:04.353: INFO: namespace: e2e-tests-projected-8zbdm, resource: bindings, ignored listing per whitelist Feb 20 11:22:04.394: INFO: namespace e2e-tests-projected-8zbdm deletion completed in 6.17221049s • [SLOW TEST:17.193 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:22:04.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 11:22:04.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-46zdk' Feb 20 11:22:04.856: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be 
removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 20 11:22:04.856: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Feb 20 11:22:06.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-46zdk' Feb 20 11:22:07.429: INFO: stderr: "" Feb 20 11:22:07.429: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:22:07.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-46zdk" for this suite. Feb 20 11:22:13.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:22:13.713: INFO: namespace: e2e-tests-kubectl-46zdk, resource: bindings, ignored listing per whitelist Feb 20 11:22:13.840: INFO: namespace e2e-tests-kubectl-46zdk deletion completed in 6.376952031s • [SLOW TEST:9.446 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:22:13.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 20 11:22:24.842: INFO: Successfully updated pod "pod-update-3e9d87b3-53d3-11ea-bcb7-0242ac110008" STEP: verifying the updated pod is in kubernetes Feb 20 11:22:24.862: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:22:24.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-sqn26" for this suite. 
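The log only records "updating the pod" and "Pod update OK"; assuming the update touches mutable metadata (labels), the same round trip can be sketched with kubectl, using hypothetical names:

kubectl run pod-update-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl wait --for=condition=Ready pod/pod-update-demo --timeout=120s
# Labels and annotations can be updated in place; most of spec is immutable once the pod exists.
kubectl patch pod pod-update-demo --type=merge -p '{"metadata":{"labels":{"time":"updated"}}}'
kubectl get pod pod-update-demo --show-labels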
Feb 20 11:22:48.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:22:49.128: INFO: namespace: e2e-tests-pods-sqn26, resource: bindings, ignored listing per whitelist Feb 20 11:22:49.236: INFO: namespace e2e-tests-pods-sqn26 deletion completed in 24.363332389s • [SLOW TEST:35.395 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:22:49.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 11:22:49.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-842ht' Feb 20 11:22:49.565: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 20 11:22:49.565: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Feb 20 11:22:49.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-842ht' Feb 20 11:22:49.764: INFO: stderr: "" Feb 20 11:22:49.764: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:22:49.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-842ht" for this suite. 
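The job creation above used the deprecated (and since removed) generator form of kubectl run, exactly as logged. A standalone sketch of the same check, run in whatever namespace is current instead of the test's throwaway one:

# Deprecated generator form, as executed by the test:
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine
# Verify the Job object exists, then clean up (the test does the same in its AfterEach):
kubectl get jobs e2e-test-nginx-job
kubectl delete jobs e2e-test-nginx-job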
Feb 20 11:22:57.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:22:57.951: INFO: namespace: e2e-tests-kubectl-842ht, resource: bindings, ignored listing per whitelist Feb 20 11:22:58.102: INFO: namespace e2e-tests-kubectl-842ht deletion completed in 8.33247072s • [SLOW TEST:8.865 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:22:58.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 20 11:22:58.314: INFO: Waiting up to 5m0s for pod "pod-58e71c99-53d3-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-tx58t" to be "success or failure" Feb 20 11:22:58.317: INFO: Pod "pod-58e71c99-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.190987ms Feb 20 11:23:00.640: INFO: Pod "pod-58e71c99-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325345275s Feb 20 11:23:02.645: INFO: Pod "pod-58e71c99-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330486186s Feb 20 11:23:04.691: INFO: Pod "pod-58e71c99-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377088948s Feb 20 11:23:06.801: INFO: Pod "pod-58e71c99-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.486605895s Feb 20 11:23:08.815: INFO: Pod "pod-58e71c99-53d3-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.500622631s STEP: Saw pod success Feb 20 11:23:08.815: INFO: Pod "pod-58e71c99-53d3-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:23:08.839: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-58e71c99-53d3-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 11:23:09.418: INFO: Waiting for pod pod-58e71c99-53d3-11ea-bcb7-0242ac110008 to disappear Feb 20 11:23:09.454: INFO: Pod pod-58e71c99-53d3-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:23:09.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-tx58t" for this suite. 
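The (non-root,0644,tmpfs) case above boils down to a memory-backed emptyDir written by a non-root UID with 0644 file mode. A rough equivalent using a generic busybox image; the names, UID and umask are illustrative and this is not the test's own test image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # non-root
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "umask 0133 && echo hello > /mnt/volume/data && ls -l /mnt/volume && grep /mnt/volume /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs-backed
EOF
kubectl logs emptydir-tmpfs-demo    # expect -rw-r--r-- (0644) and a tmpfs mount entry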
Feb 20 11:23:15.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:23:15.641: INFO: namespace: e2e-tests-emptydir-tx58t, resource: bindings, ignored listing per whitelist Feb 20 11:23:15.671: INFO: namespace e2e-tests-emptydir-tx58t deletion completed in 6.205245715s • [SLOW TEST:17.569 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:23:15.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:24:13.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-hwm76" for this suite. 
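The three containers above (terminate-cmd-rpa/rpof/rpn) appear to exercise how the kubelet reports a container that exits under different restart policies. A cut-down sketch for the Never case, checking the same fields the test asserts on (phase, restart count, terminated state); names are hypothetical:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never              # with OnFailure or Always the kubelet would restart the container instead
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}{"\n"}'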
Feb 20 11:24:19.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:24:19.513: INFO: namespace: e2e-tests-container-runtime-hwm76, resource: bindings, ignored listing per whitelist Feb 20 11:24:19.526: INFO: namespace e2e-tests-container-runtime-hwm76 deletion completed in 6.429493924s • [SLOW TEST:63.855 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:24:19.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 20 11:24:19.765: INFO: Waiting up to 5m0s for pod "pod-89717bf3-53d3-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-nbx22" to be "success or failure" Feb 20 11:24:19.799: INFO: Pod "pod-89717bf3-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 33.911146ms Feb 20 11:24:22.010: INFO: Pod "pod-89717bf3-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244428771s Feb 20 11:24:24.033: INFO: Pod "pod-89717bf3-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268057726s Feb 20 11:24:26.633: INFO: Pod "pod-89717bf3-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.867348711s Feb 20 11:24:28.656: INFO: Pod "pod-89717bf3-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.890718262s Feb 20 11:24:30.762: INFO: Pod "pod-89717bf3-53d3-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.996746867s STEP: Saw pod success Feb 20 11:24:30.762: INFO: Pod "pod-89717bf3-53d3-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:24:30.871: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-89717bf3-53d3-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 11:24:31.157: INFO: Waiting for pod pod-89717bf3-53d3-11ea-bcb7-0242ac110008 to disappear Feb 20 11:24:31.165: INFO: Pod pod-89717bf3-53d3-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:24:31.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nbx22" for this suite. Feb 20 11:24:37.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:24:37.328: INFO: namespace: e2e-tests-emptydir-nbx22, resource: bindings, ignored listing per whitelist Feb 20 11:24:37.367: INFO: namespace e2e-tests-emptydir-nbx22 deletion completed in 6.198911475s • [SLOW TEST:17.840 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:24:37.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:24:44.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-9fvg4" for this suite. Feb 20 11:24:50.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:24:50.448: INFO: namespace: e2e-tests-namespaces-9fvg4, resource: bindings, ignored listing per whitelist Feb 20 11:24:50.699: INFO: namespace e2e-tests-namespaces-9fvg4 deletion completed in 6.436106105s STEP: Destroying namespace "e2e-tests-nsdeletetest-mszvb" for this suite. Feb 20 11:24:50.707: INFO: Namespace e2e-tests-nsdeletetest-mszvb was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-pvfjq" for this suite. 
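The steps above (create a namespace, create a service in it, delete and recreate the namespace, verify the service is gone) translate directly to kubectl; the names below are hypothetical:

kubectl create namespace nsdelete-demo
kubectl create service clusterip test-service --tcp=80:80 --namespace=nsdelete-demo
kubectl delete namespace nsdelete-demo
kubectl wait --for=delete namespace/nsdelete-demo --timeout=120s
# Recreating a namespace with the same name yields a fresh, empty namespace:
kubectl create namespace nsdelete-demo
kubectl get services --namespace=nsdelete-demo     # no test-service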
Feb 20 11:24:56.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:24:56.896: INFO: namespace: e2e-tests-nsdeletetest-pvfjq, resource: bindings, ignored listing per whitelist Feb 20 11:24:56.946: INFO: namespace e2e-tests-nsdeletetest-pvfjq deletion completed in 6.239786517s • [SLOW TEST:19.579 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:24:56.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0220 11:25:00.206581 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 20 11:25:00.206: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:25:00.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mzhzh" for this suite. 
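What the garbage-collector test above checks is that deleting a Deployment without orphaning also removes the ReplicaSet (and pods) it created. A hand-run sketch with hypothetical names:

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
kubectl get replicasets -l app=gc-demo            # one ReplicaSet owned by the Deployment
# Cascading (non-orphaning) delete; newer kubectl spells this --cascade=background:
kubectl delete deployment gc-demo --cascade=true
kubectl get replicasets -l app=gc-demo            # eventually empty once the GC catches up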
Feb 20 11:25:08.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:25:08.657: INFO: namespace: e2e-tests-gc-mzhzh, resource: bindings, ignored listing per whitelist Feb 20 11:25:08.673: INFO: namespace e2e-tests-gc-mzhzh deletion completed in 8.457243769s • [SLOW TEST:11.727 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:25:08.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 20 11:25:09.161: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-x9rgc,SelfLink:/api/v1/namespaces/e2e-tests-watch-x9rgc/configmaps/e2e-watch-test-watch-closed,UID:a6cd16f2-53d3-11ea-a994-fa163e34d433,ResourceVersion:22303437,Generation:0,CreationTimestamp:2020-02-20 11:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 20 11:25:09.161: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-x9rgc,SelfLink:/api/v1/namespaces/e2e-tests-watch-x9rgc/configmaps/e2e-watch-test-watch-closed,UID:a6cd16f2-53d3-11ea-a994-fa163e34d433,ResourceVersion:22303438,Generation:0,CreationTimestamp:2020-02-20 11:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 20 11:25:09.203: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-x9rgc,SelfLink:/api/v1/namespaces/e2e-tests-watch-x9rgc/configmaps/e2e-watch-test-watch-closed,UID:a6cd16f2-53d3-11ea-a994-fa163e34d433,ResourceVersion:22303439,Generation:0,CreationTimestamp:2020-02-20 11:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 20 11:25:09.203: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-x9rgc,SelfLink:/api/v1/namespaces/e2e-tests-watch-x9rgc/configmaps/e2e-watch-test-watch-closed,UID:a6cd16f2-53d3-11ea-a994-fa163e34d433,ResourceVersion:22303440,Generation:0,CreationTimestamp:2020-02-20 11:25:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:25:09.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-x9rgc" for this suite. Feb 20 11:25:15.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:25:15.301: INFO: namespace: e2e-tests-watch-x9rgc, resource: bindings, ignored listing per whitelist Feb 20 11:25:15.387: INFO: namespace e2e-tests-watch-x9rgc deletion completed in 6.170065664s • [SLOW TEST:6.713 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:25:15.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-aab3d7d7-53d3-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 11:25:15.568: INFO: Waiting up to 5m0s for pod "pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008" in namespace "e2e-tests-configmap-x9w5k" to be "success or failure" Feb 20 11:25:15.572: INFO: Pod 
"pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.706841ms Feb 20 11:25:17.596: INFO: Pod "pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027399691s Feb 20 11:25:19.612: INFO: Pod "pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044268249s Feb 20 11:25:21.762: INFO: Pod "pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194266405s Feb 20 11:25:23.788: INFO: Pod "pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.220281969s Feb 20 11:25:25.813: INFO: Pod "pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.245111089s STEP: Saw pod success Feb 20 11:25:25.813: INFO: Pod "pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:25:25.819: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 20 11:25:27.215: INFO: Waiting for pod pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008 to disappear Feb 20 11:25:27.234: INFO: Pod pod-configmaps-aab6f043-53d3-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:25:27.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-x9w5k" for this suite. Feb 20 11:25:33.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:25:33.475: INFO: namespace: e2e-tests-configmap-x9w5k, resource: bindings, ignored listing per whitelist Feb 20 11:25:33.480: INFO: namespace e2e-tests-configmap-x9w5k deletion completed in 6.228143811s • [SLOW TEST:18.093 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:25:33.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Feb 20 11:25:34.233: INFO: Number of nodes with available pods: 0 Feb 20 11:25:34.233: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:35.691: INFO: Number of nodes with available pods: 0 Feb 20 11:25:35.692: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:36.532: INFO: Number of nodes with available pods: 0 Feb 20 11:25:36.532: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:37.271: INFO: Number of nodes with available pods: 0 Feb 20 11:25:37.271: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:38.330: INFO: Number of nodes with available pods: 0 Feb 20 11:25:38.330: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:39.815: INFO: Number of nodes with available pods: 0 Feb 20 11:25:39.815: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:40.822: INFO: Number of nodes with available pods: 0 Feb 20 11:25:40.822: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:41.351: INFO: Number of nodes with available pods: 0 Feb 20 11:25:41.351: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:42.257: INFO: Number of nodes with available pods: 0 Feb 20 11:25:42.257: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:43.252: INFO: Number of nodes with available pods: 1 Feb 20 11:25:43.252: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. Feb 20 11:25:43.372: INFO: Number of nodes with available pods: 0 Feb 20 11:25:43.372: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:44.425: INFO: Number of nodes with available pods: 0 Feb 20 11:25:44.425: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:45.398: INFO: Number of nodes with available pods: 0 Feb 20 11:25:45.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:46.406: INFO: Number of nodes with available pods: 0 Feb 20 11:25:46.406: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:47.390: INFO: Number of nodes with available pods: 0 Feb 20 11:25:47.390: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:48.395: INFO: Number of nodes with available pods: 0 Feb 20 11:25:48.395: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:49.471: INFO: Number of nodes with available pods: 0 Feb 20 11:25:49.471: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:50.399: INFO: Number of nodes with available pods: 0 Feb 20 11:25:50.399: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:51.398: INFO: Number of nodes with available pods: 0 Feb 20 11:25:51.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:52.400: INFO: Number of nodes with available pods: 0 Feb 20 11:25:52.400: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:53.389: INFO: Number of nodes with available pods: 0 Feb 20 11:25:53.389: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:54.677: INFO: Number of nodes with available pods: 0 Feb 20 11:25:54.677: INFO: Node 
hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:55.468: INFO: Number of nodes with available pods: 0 Feb 20 11:25:55.468: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:56.428: INFO: Number of nodes with available pods: 0 Feb 20 11:25:56.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:57.513: INFO: Number of nodes with available pods: 0 Feb 20 11:25:57.513: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:25:58.707: INFO: Number of nodes with available pods: 0 Feb 20 11:25:58.707: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:26:00.445: INFO: Number of nodes with available pods: 0 Feb 20 11:26:00.445: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:26:01.408: INFO: Number of nodes with available pods: 0 Feb 20 11:26:01.408: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:26:02.405: INFO: Number of nodes with available pods: 0 Feb 20 11:26:02.405: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:26:03.429: INFO: Number of nodes with available pods: 1 Feb 20 11:26:03.429: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kxjtq, will wait for the garbage collector to delete the pods Feb 20 11:26:03.527: INFO: Deleting DaemonSet.extensions daemon-set took: 14.983969ms Feb 20 11:26:03.727: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.321433ms Feb 20 11:26:11.134: INFO: Number of nodes with available pods: 0 Feb 20 11:26:11.134: INFO: Number of running nodes: 0, number of available pods: 0 Feb 20 11:26:11.143: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kxjtq/daemonsets","resourceVersion":"22303583"},"items":null} Feb 20 11:26:11.147: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kxjtq/pods","resourceVersion":"22303583"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:26:11.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-kxjtq" for this suite. 
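(Aside to the "should run and stop simple daemon" case above: a minimal sketch, not taken from this run, of the kind of simple DaemonSet the test creates, waits on per node, and then deletes. The serve-hostname image name does appear in this log; the label key/value and port are illustrative assumptions.)

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label key/value
	ds := appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{Kind: "DaemonSet", APIVersion: "apps/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// One pod from this template runs on every schedulable node; the
			// e2e check above counts "nodes with available pods" until it
			// matches the number of running nodes.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}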
Feb 20 11:26:19.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:26:19.303: INFO: namespace: e2e-tests-daemonsets-kxjtq, resource: bindings, ignored listing per whitelist Feb 20 11:26:19.355: INFO: namespace e2e-tests-daemonsets-kxjtq deletion completed in 8.195343959s • [SLOW TEST:45.875 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:26:19.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 11:26:19.590: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-fg28h" to be "success or failure" Feb 20 11:26:19.618: INFO: Pod "downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 27.110544ms Feb 20 11:26:21.637: INFO: Pod "downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046790629s Feb 20 11:26:23.649: INFO: Pod "downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05859036s Feb 20 11:26:25.925: INFO: Pod "downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.334826563s Feb 20 11:26:28.369: INFO: Pod "downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.779013586s Feb 20 11:26:30.392: INFO: Pod "downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.801899515s STEP: Saw pod success Feb 20 11:26:30.392: INFO: Pod "downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:26:30.399: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 11:26:30.678: INFO: Waiting for pod downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008 to disappear Feb 20 11:26:30.697: INFO: Pod downwardapi-volume-d0e075e7-53d3-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:26:30.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fg28h" for this suite. Feb 20 11:26:37.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:26:37.086: INFO: namespace: e2e-tests-projected-fg28h, resource: bindings, ignored listing per whitelist Feb 20 11:26:37.198: INFO: namespace e2e-tests-projected-fg28h deletion completed in 6.31107379s • [SLOW TEST:17.842 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:26:37.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Feb 20 11:26:37.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-bd2t2 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 20 11:26:48.736: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0220 11:26:46.978084 913 log.go:172] (0xc0006022c0) (0xc000886140) Create stream\nI0220 11:26:46.978159 913 log.go:172] (0xc0006022c0) (0xc000886140) Stream added, broadcasting: 1\nI0220 11:26:47.017263 913 log.go:172] (0xc0006022c0) Reply frame received for 1\nI0220 11:26:47.017414 913 log.go:172] (0xc0006022c0) (0xc0006f4d20) Create stream\nI0220 11:26:47.017446 913 log.go:172] (0xc0006022c0) (0xc0006f4d20) Stream added, broadcasting: 3\nI0220 11:26:47.020187 913 log.go:172] (0xc0006022c0) Reply frame received for 3\nI0220 11:26:47.020336 913 log.go:172] (0xc0006022c0) (0xc0008861e0) Create stream\nI0220 11:26:47.020414 913 log.go:172] (0xc0006022c0) (0xc0008861e0) Stream added, broadcasting: 5\nI0220 11:26:47.023141 913 log.go:172] (0xc0006022c0) Reply frame received for 5\nI0220 11:26:47.023301 913 log.go:172] (0xc0006022c0) (0xc00097a000) Create stream\nI0220 11:26:47.023332 913 log.go:172] (0xc0006022c0) (0xc00097a000) Stream added, broadcasting: 7\nI0220 11:26:47.024986 913 log.go:172] (0xc0006022c0) Reply frame received for 7\nI0220 11:26:47.025613 913 log.go:172] (0xc0006f4d20) (3) Writing data frame\nI0220 11:26:47.025917 913 log.go:172] (0xc0006f4d20) (3) Writing data frame\nI0220 11:26:47.067396 913 log.go:172] (0xc0006022c0) Data frame received for 5\nI0220 11:26:47.067492 913 log.go:172] (0xc0008861e0) (5) Data frame handling\nI0220 11:26:47.067531 913 log.go:172] (0xc0008861e0) (5) Data frame sent\nI0220 11:26:47.067540 913 log.go:172] (0xc0006022c0) Data frame received for 5\nI0220 11:26:47.067560 913 log.go:172] (0xc0008861e0) (5) Data frame handling\nI0220 11:26:47.067645 913 log.go:172] (0xc0008861e0) (5) Data frame sent\nI0220 11:26:48.463402 913 log.go:172] (0xc0006022c0) Data frame received for 1\nI0220 11:26:48.463828 913 log.go:172] (0xc000886140) (1) Data frame handling\nI0220 11:26:48.463886 913 log.go:172] (0xc000886140) (1) Data frame sent\nI0220 11:26:48.463915 913 log.go:172] (0xc0006022c0) (0xc000886140) Stream removed, broadcasting: 1\nI0220 11:26:48.464639 913 log.go:172] (0xc0006022c0) (0xc0006f4d20) Stream removed, broadcasting: 3\nI0220 11:26:48.465341 913 log.go:172] (0xc0006022c0) (0xc0008861e0) Stream removed, broadcasting: 5\nI0220 11:26:48.465614 913 log.go:172] (0xc0006022c0) (0xc00097a000) Stream removed, broadcasting: 7\nI0220 11:26:48.465677 913 log.go:172] (0xc0006022c0) (0xc000886140) Stream removed, broadcasting: 1\nI0220 11:26:48.465691 913 log.go:172] (0xc0006022c0) (0xc0006f4d20) Stream removed, broadcasting: 3\nI0220 11:26:48.465702 913 log.go:172] (0xc0006022c0) (0xc0008861e0) Stream removed, broadcasting: 5\nI0220 11:26:48.465713 913 log.go:172] (0xc0006022c0) (0xc00097a000) Stream removed, broadcasting: 7\nI0220 11:26:48.471165 913 log.go:172] (0xc0006022c0) Go away received\n" Feb 20 11:26:48.736: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:26:50.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bd2t2" for this suite. 
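(Aside to the "Kubectl run --rm job" case above: the log captures only the CLI invocation, so the following is a rough, non-authoritative Go sketch of the batch/v1 Job that `kubectl run --generator=job/v1 --restart=OnFailure --attach --stdin` would materialize. Image, command and restart policy are taken from the command line in the log; everything else is left to API-server defaults.)

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := batchv1.Job{
		TypeMeta:   metav1.TypeMeta{Kind: "Job", APIVersion: "batch/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// --restart=OnFailure from the command line maps straight
					// onto the pod template's restart policy.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "e2e-test-rm-busybox-job",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
						// --stdin/--attach: keep stdin open for a single attach,
						// so the attached input is echoed back by `cat` before
						// "stdin closed" (matching the stdout seen in the log).
						Stdin:     true,
						StdinOnce: true,
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}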
Feb 20 11:26:56.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:26:56.966: INFO: namespace: e2e-tests-kubectl-bd2t2, resource: bindings, ignored listing per whitelist Feb 20 11:26:57.030: INFO: namespace e2e-tests-kubectl-bd2t2 deletion completed in 6.267119939s • [SLOW TEST:19.832 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:26:57.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:27:10.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-m7jpv" for this suite. 
Feb 20 11:27:34.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:27:34.748: INFO: namespace: e2e-tests-replication-controller-m7jpv, resource: bindings, ignored listing per whitelist Feb 20 11:27:34.832: INFO: namespace e2e-tests-replication-controller-m7jpv deletion completed in 24.237290735s • [SLOW TEST:37.802 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:27:34.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-fde00301-53d3-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 11:27:35.108: INFO: Waiting up to 5m0s for pod "pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008" in namespace "e2e-tests-secrets-2sb66" to be "success or failure" Feb 20 11:27:35.176: INFO: Pod "pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 67.622517ms Feb 20 11:27:37.191: INFO: Pod "pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082806598s Feb 20 11:27:39.204: INFO: Pod "pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095195834s Feb 20 11:27:41.248: INFO: Pod "pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139879581s Feb 20 11:27:43.576: INFO: Pod "pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.467481081s Feb 20 11:27:45.604: INFO: Pod "pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.495938252s STEP: Saw pod success Feb 20 11:27:45.605: INFO: Pod "pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:27:45.614: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 20 11:27:46.022: INFO: Waiting for pod pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008 to disappear Feb 20 11:27:46.035: INFO: Pod pod-secrets-fde130f2-53d3-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:27:46.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2sb66" for this suite. 
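(Aside to the Secrets volume test above, which mounts a secret with explicit key-to-path mappings: a minimal sketch of such a pod using the core/v1 Go types. Names, paths and the key are invented; only the general shape of the test is mirrored.)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-demo",
						// "with mappings": only the listed keys are projected into
						// the volume, each under the relative path given here.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}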
Feb 20 11:27:52.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:27:52.245: INFO: namespace: e2e-tests-secrets-2sb66, resource: bindings, ignored listing per whitelist Feb 20 11:27:52.306: INFO: namespace e2e-tests-secrets-2sb66 deletion completed in 6.262905184s • [SLOW TEST:17.475 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:27:52.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-dsnq STEP: Creating a pod to test atomic-volume-subpath Feb 20 11:27:52.843: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dsnq" in namespace "e2e-tests-subpath-hl8tb" to be "success or failure" Feb 20 11:27:52.865: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Pending", Reason="", readiness=false. Elapsed: 21.903506ms Feb 20 11:27:54.916: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073543397s Feb 20 11:27:56.930: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087275863s Feb 20 11:27:58.945: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101830604s Feb 20 11:28:00.957: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11394767s Feb 20 11:28:02.968: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125641798s Feb 20 11:28:04.991: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.14852824s Feb 20 11:28:07.023: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.180376777s Feb 20 11:28:09.077: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Running", Reason="", readiness=false. Elapsed: 16.233949515s Feb 20 11:28:11.095: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Running", Reason="", readiness=false. Elapsed: 18.252328876s Feb 20 11:28:13.121: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Running", Reason="", readiness=false. Elapsed: 20.277957211s Feb 20 11:28:15.141: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.29792189s Feb 20 11:28:17.167: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Running", Reason="", readiness=false. Elapsed: 24.324355175s Feb 20 11:28:19.185: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Running", Reason="", readiness=false. Elapsed: 26.342252507s Feb 20 11:28:21.201: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Running", Reason="", readiness=false. Elapsed: 28.358512909s Feb 20 11:28:23.216: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Running", Reason="", readiness=false. Elapsed: 30.373134262s Feb 20 11:28:25.245: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Running", Reason="", readiness=false. Elapsed: 32.401864174s Feb 20 11:28:27.267: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Running", Reason="", readiness=false. Elapsed: 34.424386894s Feb 20 11:28:29.283: INFO: Pod "pod-subpath-test-configmap-dsnq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.439716475s STEP: Saw pod success Feb 20 11:28:29.283: INFO: Pod "pod-subpath-test-configmap-dsnq" satisfied condition "success or failure" Feb 20 11:28:29.292: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-dsnq container test-container-subpath-configmap-dsnq: STEP: delete the pod Feb 20 11:28:29.472: INFO: Waiting for pod pod-subpath-test-configmap-dsnq to disappear Feb 20 11:28:29.482: INFO: Pod pod-subpath-test-configmap-dsnq no longer exists STEP: Deleting pod pod-subpath-test-configmap-dsnq Feb 20 11:28:29.482: INFO: Deleting pod "pod-subpath-test-configmap-dsnq" in namespace "e2e-tests-subpath-hl8tb" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:28:29.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-hl8tb" for this suite. 
Feb 20 11:28:35.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:28:35.721: INFO: namespace: e2e-tests-subpath-hl8tb, resource: bindings, ignored listing per whitelist Feb 20 11:28:35.863: INFO: namespace e2e-tests-subpath-hl8tb deletion completed in 6.372682137s • [SLOW TEST:43.557 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:28:35.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Feb 20 11:28:36.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 20 11:28:36.265: INFO: stderr: "" Feb 20 11:28:36.265: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:28:36.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-74v9b" for this suite. 
Feb 20 11:28:42.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:28:42.332: INFO: namespace: e2e-tests-kubectl-74v9b, resource: bindings, ignored listing per whitelist Feb 20 11:28:42.579: INFO: namespace e2e-tests-kubectl-74v9b deletion completed in 6.303948149s • [SLOW TEST:6.716 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:28:42.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 11:28:42.777: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Feb 20 11:28:42.783: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bg5x7/daemonsets","resourceVersion":"22303946"},"items":null} Feb 20 11:28:42.787: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bg5x7/pods","resourceVersion":"22303946"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:28:42.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-bg5x7" for this suite. 
Feb 20 11:28:48.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:28:48.899: INFO: namespace: e2e-tests-daemonsets-bg5x7, resource: bindings, ignored listing per whitelist Feb 20 11:28:49.025: INFO: namespace e2e-tests-daemonsets-bg5x7 deletion completed in 6.227769726s S [SKIPPING] [6.445 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 11:28:42.777: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:28:49.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-2a146834-53d4-11ea-bcb7-0242ac110008 STEP: Creating secret with name s-test-opt-upd-2a146a51-53d4-11ea-bcb7-0242ac110008 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2a146834-53d4-11ea-bcb7-0242ac110008 STEP: Updating secret s-test-opt-upd-2a146a51-53d4-11ea-bcb7-0242ac110008 STEP: Creating secret with name s-test-opt-create-2a146a68-53d4-11ea-bcb7-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:30:13.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-d8zjd" for this suite. 
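(Aside to the "optional updates should be reflected in volume" case above: it leans on the Optional flag of a secret volume source, which lets a pod mount a secret that is deleted later or does not exist yet, with the kubelet syncing the volume contents as the secrets change. A small sketch of that volume shape with invented names:)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vols := []corev1.Volume{
		{
			// Secret deleted while the pod is running: the mount empties out
			// instead of breaking the pod, because Optional is set.
			Name: "del-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "s-test-opt-del-demo", Optional: &optional},
			},
		},
		{
			// Secret created only after the pod starts: the kubelet populates
			// the mount once the secret shows up.
			Name: "create-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "s-test-opt-create-demo", Optional: &optional},
			},
		},
	}
	out, _ := json.MarshalIndent(vols, "", "  ")
	fmt.Println(string(out))
}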
Feb 20 11:30:31.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:30:31.558: INFO: namespace: e2e-tests-secrets-d8zjd, resource: bindings, ignored listing per whitelist Feb 20 11:30:31.567: INFO: namespace e2e-tests-secrets-d8zjd deletion completed in 18.239472182s • [SLOW TEST:102.542 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:30:31.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0220 11:30:45.593210 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 20 11:30:45.593: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:30:45.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-bqzks" for this suite. 
Feb 20 11:31:05.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:31:06.517: INFO: namespace: e2e-tests-gc-bqzks, resource: bindings, ignored listing per whitelist Feb 20 11:31:06.705: INFO: namespace e2e-tests-gc-bqzks deletion completed in 21.104628715s • [SLOW TEST:35.137 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:31:06.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:32:07.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-qfgmg" for this suite. 
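(Aside to the probing test above: a readiness probe that always fails keeps the pod Running but never Ready and, unlike a liveness probe, never triggers a restart, which is exactly what the test asserts. A minimal sketch of such a container spec; the probe command and timings are assumptions, and the field names follow this suite's API vintage, where the probe handler field is still called Handler rather than the later ProbeHandler.)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "probe-demo",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"sh", "-c", "sleep 600"},
		ReadinessProbe: &corev1.Probe{
			// Always exits non-zero, so the container never becomes Ready;
			// a failing readiness probe only removes the pod from endpoints,
			// it never restarts the container (restart count stays at 0).
			Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
			InitialDelaySeconds: 5,
			PeriodSeconds:       5,
			FailureThreshold:    3,
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}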
Feb 20 11:32:31.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:32:31.235: INFO: namespace: e2e-tests-container-probe-qfgmg, resource: bindings, ignored listing per whitelist Feb 20 11:32:31.349: INFO: namespace e2e-tests-container-probe-qfgmg deletion completed in 24.193829311s • [SLOW TEST:84.643 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:32:31.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-ae8ce423-53d4-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 11:32:31.576: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-p9c7w" to be "success or failure" Feb 20 11:32:31.593: INFO: Pod "pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.160678ms Feb 20 11:32:33.627: INFO: Pod "pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051304039s Feb 20 11:32:35.648: INFO: Pod "pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072298994s Feb 20 11:32:37.777: INFO: Pod "pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201769827s Feb 20 11:32:39.802: INFO: Pod "pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226291736s Feb 20 11:32:41.849: INFO: Pod "pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.273322261s STEP: Saw pod success Feb 20 11:32:41.849: INFO: Pod "pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:32:42.122: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 20 11:32:42.518: INFO: Waiting for pod pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008 to disappear Feb 20 11:32:42.548: INFO: Pod pod-projected-secrets-ae8d5c8d-53d4-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:32:42.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p9c7w" for this suite. Feb 20 11:32:48.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:32:48.759: INFO: namespace: e2e-tests-projected-p9c7w, resource: bindings, ignored listing per whitelist Feb 20 11:32:48.888: INFO: namespace e2e-tests-projected-p9c7w deletion completed in 6.249455829s • [SLOW TEST:17.538 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:32:48.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:32:49.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-2rlfp" for this suite. 
Feb 20 11:32:55.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:32:55.412: INFO: namespace: e2e-tests-services-2rlfp, resource: bindings, ignored listing per whitelist Feb 20 11:32:55.416: INFO: namespace e2e-tests-services-2rlfp deletion completed in 6.342324158s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.528 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:32:55.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Feb 20 11:32:55.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 20 11:32:55.724: INFO: stderr: "" Feb 20 11:32:55.724: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:32:55.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-k5zds" for this suite. 
Feb 20 11:33:01.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:33:01.845: INFO: namespace: e2e-tests-kubectl-k5zds, resource: bindings, ignored listing per whitelist Feb 20 11:33:01.952: INFO: namespace e2e-tests-kubectl-k5zds deletion completed in 6.214826963s • [SLOW TEST:6.536 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:33:01.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-nh8r STEP: Creating a pod to test atomic-volume-subpath Feb 20 11:33:02.265: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nh8r" in namespace "e2e-tests-subpath-mjnnh" to be "success or failure" Feb 20 11:33:02.279: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.320596ms Feb 20 11:33:04.507: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242026144s Feb 20 11:33:06.524: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259282234s Feb 20 11:33:08.560: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29489444s Feb 20 11:33:10.580: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314923028s Feb 20 11:33:12.641: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.375930854s Feb 20 11:33:14.712: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.446831893s Feb 20 11:33:16.816: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.551264633s Feb 20 11:33:18.833: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Pending", Reason="", readiness=false. Elapsed: 16.567855594s Feb 20 11:33:20.854: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Running", Reason="", readiness=false. Elapsed: 18.589296144s Feb 20 11:33:22.874: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.608712509s Feb 20 11:33:24.897: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Running", Reason="", readiness=false. Elapsed: 22.632354639s Feb 20 11:33:26.932: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Running", Reason="", readiness=false. Elapsed: 24.667295323s Feb 20 11:33:28.964: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Running", Reason="", readiness=false. Elapsed: 26.698692222s Feb 20 11:33:30.987: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Running", Reason="", readiness=false. Elapsed: 28.722413789s Feb 20 11:33:33.000: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Running", Reason="", readiness=false. Elapsed: 30.734707673s Feb 20 11:33:35.021: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Running", Reason="", readiness=false. Elapsed: 32.755757937s Feb 20 11:33:37.034: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Running", Reason="", readiness=false. Elapsed: 34.769317702s Feb 20 11:33:39.069: INFO: Pod "pod-subpath-test-downwardapi-nh8r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.804069135s STEP: Saw pod success Feb 20 11:33:39.069: INFO: Pod "pod-subpath-test-downwardapi-nh8r" satisfied condition "success or failure" Feb 20 11:33:39.097: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-nh8r container test-container-subpath-downwardapi-nh8r: STEP: delete the pod Feb 20 11:33:39.408: INFO: Waiting for pod pod-subpath-test-downwardapi-nh8r to disappear Feb 20 11:33:39.505: INFO: Pod pod-subpath-test-downwardapi-nh8r no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-nh8r Feb 20 11:33:39.506: INFO: Deleting pod "pod-subpath-test-downwardapi-nh8r" in namespace "e2e-tests-subpath-mjnnh" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:33:39.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-mjnnh" for this suite. 
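(Aside to the subpath test just above, which mounts only a sub-directory of a downward-API volume into the container via VolumeMount.SubPath: a minimal illustrative sketch of the pod shape. Paths, file mode and names are assumptions, not values copied from the run.)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0444)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "subpath-demo",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /test-volume/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					// Only the "downward" sub-directory of the volume is mounted,
					// exercising the atomic-writer path the test is named after.
					SubPath: "downward",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "downward/podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}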
Feb 20 11:33:47.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:33:47.671: INFO: namespace: e2e-tests-subpath-mjnnh, resource: bindings, ignored listing per whitelist Feb 20 11:33:47.717: INFO: namespace e2e-tests-subpath-mjnnh deletion completed in 8.182454687s • [SLOW TEST:45.765 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:33:47.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 20 11:33:57.985: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-dc1d628f-53d4-11ea-bcb7-0242ac110008,GenerateName:,Namespace:e2e-tests-events-dzq4l,SelfLink:/api/v1/namespaces/e2e-tests-events-dzq4l/pods/send-events-dc1d628f-53d4-11ea-bcb7-0242ac110008,UID:dc1f3034-53d4-11ea-a994-fa163e34d433,ResourceVersion:22304605,Generation:0,CreationTimestamp:2020-02-20 11:33:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 930699295,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7ppgs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7ppgs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-7ppgs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00171f5f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00171f610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:33:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:33:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:33:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:33:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-20 11:33:48 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-20 11:33:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://d721fc609611f2c6af4cb2b3d6a79960ea1aaf831a1a872fc013376a05d7540e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 20 11:33:59.999: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 20 11:34:02.050: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:34:02.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-dzq4l" for this suite. Feb 20 11:34:44.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:34:44.278: INFO: namespace: e2e-tests-events-dzq4l, resource: bindings, ignored listing per whitelist Feb 20 11:34:44.402: INFO: namespace e2e-tests-events-dzq4l deletion completed in 42.313403403s • [SLOW TEST:56.685 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:34:44.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 20 11:34:44.833: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:35:10.746: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-wmtp9" for this suite. Feb 20 11:35:48.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:35:49.022: INFO: namespace: e2e-tests-init-container-wmtp9, resource: bindings, ignored listing per whitelist Feb 20 11:35:49.139: INFO: namespace e2e-tests-init-container-wmtp9 deletion completed in 38.346571423s • [SLOW TEST:64.737 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:35:49.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 11:36:13.390: INFO: Container started at 2020-02-20 11:35:57 +0000 UTC, pod became ready at 2020-02-20 11:36:12 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:36:13.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-l7j2p" for this suite. 
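The readiness-probe spec that just ran asserts that a container is not reported Ready before its configured initial delay and is never restarted. A minimal sketch of a pod with such a probe, using the Go types from a current k8s.io/api (recent releases name the embedded field ProbeHandler; the v1.13-era API in this run calls it Handler). The pod name, image, and 30-second delay are illustrative only:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod with a readiness probe that only starts passing after the initial
	// delay, mirroring the "should not be ready before initial delay" check above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "probe-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /tmp/ready && sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
					},
					InitialDelaySeconds: 30, // pod stays NotReady for at least this long
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Watching such a pod with `kubectl get pod probe-demo -w` would show READY 0/1 for at least the initial delay, then 1/1 with no restarts, which is the gap between "Container started at" and "pod became ready at" reported above.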
Feb 20 11:36:37.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:36:37.565: INFO: namespace: e2e-tests-container-probe-l7j2p, resource: bindings, ignored listing per whitelist Feb 20 11:36:37.665: INFO: namespace e2e-tests-container-probe-l7j2p deletion completed in 24.268563272s • [SLOW TEST:48.525 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:36:37.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 20 11:36:37.910: INFO: Waiting up to 5m0s for pod "downward-api-41628137-53d5-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-pl9xn" to be "success or failure" Feb 20 11:36:37.943: INFO: Pod "downward-api-41628137-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 33.056142ms Feb 20 11:36:39.967: INFO: Pod "downward-api-41628137-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057422522s Feb 20 11:36:41.997: INFO: Pod "downward-api-41628137-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087384266s Feb 20 11:36:44.427: INFO: Pod "downward-api-41628137-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.517596878s Feb 20 11:36:46.447: INFO: Pod "downward-api-41628137-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536958747s Feb 20 11:36:48.502: INFO: Pod "downward-api-41628137-53d5-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.591625491s STEP: Saw pod success Feb 20 11:36:48.502: INFO: Pod "downward-api-41628137-53d5-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:36:48.510: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-41628137-53d5-11ea-bcb7-0242ac110008 container dapi-container: STEP: delete the pod Feb 20 11:36:48.700: INFO: Waiting for pod downward-api-41628137-53d5-11ea-bcb7-0242ac110008 to disappear Feb 20 11:36:48.713: INFO: Pod downward-api-41628137-53d5-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:36:48.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pl9xn" for this suite. 
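The Downward API test above reads limits.cpu and limits.memory through environment variables on a container that sets no resource limits, so the reported values fall back to the node's allocatable CPU and memory. A sketch of the relevant spec, again with current k8s.io/api types and illustrative names; the divisors are an assumption added here to make the units explicit:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No resources are set on the container, so the downward API values below
	// default to the node's allocatable CPU/memory, which is what the test asserts.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep _LIMIT"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.cpu",
								Divisor:  resource.MustParse("1m"), // report in millicores
							},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.memory",
								Divisor:  resource.MustParse("1Mi"), // report in MiB
							},
						},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```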
Feb 20 11:36:54.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:36:55.007: INFO: namespace: e2e-tests-downward-api-pl9xn, resource: bindings, ignored listing per whitelist Feb 20 11:36:55.057: INFO: namespace e2e-tests-downward-api-pl9xn deletion completed in 6.332250927s • [SLOW TEST:17.392 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:36:55.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4bc28c0e-53d5-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 11:36:55.298: INFO: Waiting up to 5m0s for pod "pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008" in namespace "e2e-tests-secrets-m9ls8" to be "success or failure" Feb 20 11:36:55.396: INFO: Pod "pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 98.399738ms Feb 20 11:36:57.534: INFO: Pod "pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235584709s Feb 20 11:36:59.551: INFO: Pod "pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252934703s Feb 20 11:37:01.865: INFO: Pod "pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.567350031s Feb 20 11:37:03.889: INFO: Pod "pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.590949173s Feb 20 11:37:05.905: INFO: Pod "pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.606631786s STEP: Saw pod success Feb 20 11:37:05.905: INFO: Pod "pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:37:05.915: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008 container secret-env-test: STEP: delete the pod Feb 20 11:37:06.595: INFO: Waiting for pod pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008 to disappear Feb 20 11:37:06.650: INFO: Pod pod-secrets-4bc7da3b-53d5-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:37:06.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-m9ls8" for this suite. 
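The Secrets test above injects a secret key into a container environment variable and checks the container's output. A minimal sketch of the two objects involved, with hypothetical names (secret-env-demo, data-1) standing in for the generated ones in the log:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A secret with one key, plus a pod that surfaces that key as SECRET_DATA,
	// mirroring the "consumable from pods in env vars" flow above.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		out, err := json.MarshalIndent(obj, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
}
```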
Feb 20 11:37:12.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:37:12.976: INFO: namespace: e2e-tests-secrets-m9ls8, resource: bindings, ignored listing per whitelist Feb 20 11:37:13.109: INFO: namespace e2e-tests-secrets-m9ls8 deletion completed in 6.373531069s • [SLOW TEST:18.051 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:37:13.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-qdjj8 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 20 11:37:13.279: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 20 11:37:47.772: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-qdjj8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 11:37:47.772: INFO: >>> kubeConfig: /root/.kube/config I0220 11:37:47.889447 8 log.go:172] (0xc001bf42c0) (0xc00250c460) Create stream I0220 11:37:47.889541 8 log.go:172] (0xc001bf42c0) (0xc00250c460) Stream added, broadcasting: 1 I0220 11:37:47.896644 8 log.go:172] (0xc001bf42c0) Reply frame received for 1 I0220 11:37:47.896692 8 log.go:172] (0xc001bf42c0) (0xc001acca00) Create stream I0220 11:37:47.896707 8 log.go:172] (0xc001bf42c0) (0xc001acca00) Stream added, broadcasting: 3 I0220 11:37:47.898327 8 log.go:172] (0xc001bf42c0) Reply frame received for 3 I0220 11:37:47.898362 8 log.go:172] (0xc001bf42c0) (0xc001accaa0) Create stream I0220 11:37:47.898374 8 log.go:172] (0xc001bf42c0) (0xc001accaa0) Stream added, broadcasting: 5 I0220 11:37:47.899948 8 log.go:172] (0xc001bf42c0) Reply frame received for 5 I0220 11:37:48.048991 8 log.go:172] (0xc001bf42c0) Data frame received for 3 I0220 11:37:48.049048 8 log.go:172] (0xc001acca00) (3) Data frame handling I0220 11:37:48.049064 8 log.go:172] (0xc001acca00) (3) Data frame sent I0220 11:37:48.173985 8 log.go:172] (0xc001bf42c0) (0xc001acca00) Stream removed, broadcasting: 3 I0220 11:37:48.174115 8 log.go:172] (0xc001bf42c0) Data frame received for 1 I0220 11:37:48.174130 8 log.go:172] (0xc00250c460) (1) Data frame handling I0220 11:37:48.174143 8 log.go:172] (0xc00250c460) (1) Data frame sent I0220 11:37:48.174149 8 log.go:172] (0xc001bf42c0) (0xc00250c460) Stream 
removed, broadcasting: 1 I0220 11:37:48.174224 8 log.go:172] (0xc001bf42c0) (0xc001accaa0) Stream removed, broadcasting: 5 I0220 11:37:48.174260 8 log.go:172] (0xc001bf42c0) (0xc00250c460) Stream removed, broadcasting: 1 I0220 11:37:48.174266 8 log.go:172] (0xc001bf42c0) (0xc001acca00) Stream removed, broadcasting: 3 I0220 11:37:48.174272 8 log.go:172] (0xc001bf42c0) (0xc001accaa0) Stream removed, broadcasting: 5 Feb 20 11:37:48.174: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 I0220 11:37:48.174908 8 log.go:172] (0xc001bf42c0) Go away received Feb 20 11:37:48.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-qdjj8" for this suite. Feb 20 11:38:12.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:38:12.453: INFO: namespace: e2e-tests-pod-network-test-qdjj8, resource: bindings, ignored listing per whitelist Feb 20 11:38:12.543: INFO: namespace e2e-tests-pod-network-test-qdjj8 deletion completed in 24.341632509s • [SLOW TEST:59.434 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:38:12.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-79f8e545-53d5-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 11:38:12.887: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-988zw" to be "success or failure" Feb 20 11:38:12.903: INFO: Pod "pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.418504ms Feb 20 11:38:14.922: INFO: Pod "pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035809039s Feb 20 11:38:16.937: INFO: Pod "pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050061675s Feb 20 11:38:18.949: INFO: Pod "pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.061892746s Feb 20 11:38:21.205: INFO: Pod "pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318702571s Feb 20 11:38:23.223: INFO: Pod "pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.336085843s Feb 20 11:38:25.236: INFO: Pod "pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.349171429s STEP: Saw pod success Feb 20 11:38:25.236: INFO: Pod "pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:38:25.240: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 20 11:38:26.358: INFO: Waiting for pod pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008 to disappear Feb 20 11:38:26.448: INFO: Pod pod-projected-configmaps-79fd1523-53d5-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:38:26.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-988zw" for this suite. Feb 20 11:38:34.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:38:34.704: INFO: namespace: e2e-tests-projected-988zw, resource: bindings, ignored listing per whitelist Feb 20 11:38:34.806: INFO: namespace e2e-tests-projected-988zw deletion completed in 8.323621541s • [SLOW TEST:22.263 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:38:34.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 11:38:35.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-nk5zr" to be "success or failure" Feb 20 11:38:35.121: INFO: Pod "downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 99.239263ms Feb 20 11:38:37.408: INFO: Pod "downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.385676129s Feb 20 11:38:39.428: INFO: Pod "downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.40586215s Feb 20 11:38:41.731: INFO: Pod "downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.708936108s Feb 20 11:38:43.760: INFO: Pod "downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.738078704s Feb 20 11:38:45.779: INFO: Pod "downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.756401097s STEP: Saw pod success Feb 20 11:38:45.779: INFO: Pod "downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:38:45.784: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 11:38:47.498: INFO: Waiting for pod downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008 to disappear Feb 20 11:38:47.523: INFO: Pod downwardapi-volume-87386b4c-53d5-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:38:47.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nk5zr" for this suite. Feb 20 11:38:53.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:38:53.963: INFO: namespace: e2e-tests-projected-nk5zr, resource: bindings, ignored listing per whitelist Feb 20 11:38:53.973: INFO: namespace e2e-tests-projected-nk5zr deletion completed in 6.434663307s • [SLOW TEST:19.166 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:38:53.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:39:02.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-nqr4t" for 
this suite. Feb 20 11:39:56.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:39:56.450: INFO: namespace: e2e-tests-kubelet-test-nqr4t, resource: bindings, ignored listing per whitelist Feb 20 11:39:56.831: INFO: namespace e2e-tests-kubelet-test-nqr4t deletion completed in 54.454154296s • [SLOW TEST:62.858 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:39:56.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 11:39:57.097: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 20 11:39:57.210: INFO: Number of nodes with available pods: 0 Feb 20 11:39:57.210: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
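The DaemonSet test in progress here creates a daemon with a node selector and then relabels the node so that a daemon pod gets scheduled ("Change node label to blue" above); the loop that follows polls until that pod is available. A sketch of how such a relabel can be issued with client-go; this assumes a current client-go (context-taking method signatures, unlike the v1.13-era client driving this run), and the "color: blue" label key is hypothetical, standing in for the label the suite actually applies:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch that adds a "color: blue" label to the node; a
	// DaemonSet whose template sets nodeSelector {"color": "blue"} then gets a
	// daemon pod scheduled there, which is what the polling below waits for.
	patch := []byte(`{"metadata":{"labels":{"color":"blue"}}}`)
	node, err := clientset.CoreV1().Nodes().Patch(context.TODO(),
		"hunter-server-hu5at5svl7ps", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %s labels: %v\n", node.Name, node.Labels)
}
```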
Feb 20 11:39:57.261: INFO: Number of nodes with available pods: 0 Feb 20 11:39:57.261: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:39:58.731: INFO: Number of nodes with available pods: 0 Feb 20 11:39:58.731: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:39:59.277: INFO: Number of nodes with available pods: 0 Feb 20 11:39:59.277: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:00.274: INFO: Number of nodes with available pods: 0 Feb 20 11:40:00.274: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:01.286: INFO: Number of nodes with available pods: 0 Feb 20 11:40:01.286: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:03.425: INFO: Number of nodes with available pods: 0 Feb 20 11:40:03.425: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:04.887: INFO: Number of nodes with available pods: 0 Feb 20 11:40:04.887: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:05.356: INFO: Number of nodes with available pods: 0 Feb 20 11:40:05.356: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:06.273: INFO: Number of nodes with available pods: 1 Feb 20 11:40:06.273: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 20 11:40:06.387: INFO: Number of nodes with available pods: 1 Feb 20 11:40:06.387: INFO: Number of running nodes: 0, number of available pods: 1 Feb 20 11:40:07.400: INFO: Number of nodes with available pods: 0 Feb 20 11:40:07.400: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 20 11:40:07.434: INFO: Number of nodes with available pods: 0 Feb 20 11:40:07.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:08.458: INFO: Number of nodes with available pods: 0 Feb 20 11:40:08.458: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:09.657: INFO: Number of nodes with available pods: 0 Feb 20 11:40:09.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:10.451: INFO: Number of nodes with available pods: 0 Feb 20 11:40:10.451: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:11.639: INFO: Number of nodes with available pods: 0 Feb 20 11:40:11.639: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:12.460: INFO: Number of nodes with available pods: 0 Feb 20 11:40:12.460: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:13.454: INFO: Number of nodes with available pods: 0 Feb 20 11:40:13.455: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:14.458: INFO: Number of nodes with available pods: 0 Feb 20 11:40:14.458: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:15.485: INFO: Number of nodes with available pods: 0 Feb 20 11:40:15.485: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:16.458: INFO: Number of nodes with available pods: 0 Feb 20 11:40:16.459: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:17.449: INFO: Number of 
nodes with available pods: 0 Feb 20 11:40:17.449: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:18.452: INFO: Number of nodes with available pods: 0 Feb 20 11:40:18.452: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:19.473: INFO: Number of nodes with available pods: 0 Feb 20 11:40:19.473: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:20.454: INFO: Number of nodes with available pods: 0 Feb 20 11:40:20.454: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:21.454: INFO: Number of nodes with available pods: 0 Feb 20 11:40:21.454: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:22.474: INFO: Number of nodes with available pods: 0 Feb 20 11:40:22.474: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:23.445: INFO: Number of nodes with available pods: 0 Feb 20 11:40:23.446: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:24.816: INFO: Number of nodes with available pods: 0 Feb 20 11:40:24.816: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:25.452: INFO: Number of nodes with available pods: 0 Feb 20 11:40:25.452: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:26.450: INFO: Number of nodes with available pods: 0 Feb 20 11:40:26.451: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:27.455: INFO: Number of nodes with available pods: 0 Feb 20 11:40:27.455: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:28.455: INFO: Number of nodes with available pods: 0 Feb 20 11:40:28.455: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:29.930: INFO: Number of nodes with available pods: 0 Feb 20 11:40:29.930: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:30.453: INFO: Number of nodes with available pods: 0 Feb 20 11:40:30.454: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:31.451: INFO: Number of nodes with available pods: 0 Feb 20 11:40:31.452: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 11:40:32.471: INFO: Number of nodes with available pods: 1 Feb 20 11:40:32.471: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-thm8s, will wait for the garbage collector to delete the pods Feb 20 11:40:32.618: INFO: Deleting DaemonSet.extensions daemon-set took: 64.430974ms Feb 20 11:40:32.818: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.656598ms Feb 20 11:40:42.643: INFO: Number of nodes with available pods: 0 Feb 20 11:40:42.643: INFO: Number of running nodes: 0, number of available pods: 0 Feb 20 11:40:42.661: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-thm8s/daemonsets","resourceVersion":"22305383"},"items":null} Feb 20 11:40:42.669: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-thm8s/pods","resourceVersion":"22305383"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:40:42.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-thm8s" for this suite. Feb 20 11:40:48.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:40:48.874: INFO: namespace: e2e-tests-daemonsets-thm8s, resource: bindings, ignored listing per whitelist Feb 20 11:40:48.944: INFO: namespace e2e-tests-daemonsets-thm8s deletion completed in 6.21262288s • [SLOW TEST:52.113 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:40:48.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-d73217d4-53d5-11ea-bcb7-0242ac110008 STEP: Creating configMap with name cm-test-opt-upd-d732183b-53d5-11ea-bcb7-0242ac110008 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d73217d4-53d5-11ea-bcb7-0242ac110008 STEP: Updating configmap cm-test-opt-upd-d732183b-53d5-11ea-bcb7-0242ac110008 STEP: Creating configMap with name cm-test-opt-create-d7321865-53d5-11ea-bcb7-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:42:24.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dpgzj" for this suite. 
Feb 20 11:42:48.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:42:48.374: INFO: namespace: e2e-tests-configmap-dpgzj, resource: bindings, ignored listing per whitelist Feb 20 11:42:48.684: INFO: namespace e2e-tests-configmap-dpgzj deletion completed in 24.424604844s • [SLOW TEST:119.739 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:42:48.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 20 11:43:09.029: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 11:43:09.096: INFO: Pod pod-with-poststart-http-hook still exists Feb 20 11:43:11.096: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 11:43:11.138: INFO: Pod pod-with-poststart-http-hook still exists Feb 20 11:43:13.096: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 11:43:13.164: INFO: Pod pod-with-poststart-http-hook still exists Feb 20 11:43:15.096: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 20 11:43:15.111: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:43:15.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dj7kp" for this suite. 
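The lifecycle-hook test above first starts a handler pod, then creates a pod whose container declares a postStart HTTP hook pointing at that handler; the kubelet issues the GET right after the container starts, which is what the "check poststart hook" step observes. A sketch of the hook declaration with current k8s.io/api types (recent releases use LifecycleHandler; the v1.13 API named this type Handler); the handler IP and port here are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "poststart",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: "10.32.0.4",          // IP of the hook-handler pod (hypothetical here)
							Port: intstr.FromInt(8080), // port the handler listens on
							Path: "/echo?msg=poststart",
						},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```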
Feb 20 11:43:39.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:43:39.246: INFO: namespace: e2e-tests-container-lifecycle-hook-dj7kp, resource: bindings, ignored listing per whitelist Feb 20 11:43:39.308: INFO: namespace e2e-tests-container-lifecycle-hook-dj7kp deletion completed in 24.187759045s • [SLOW TEST:50.624 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:43:39.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-qbtm7 in namespace e2e-tests-proxy-q47n9 I0220 11:43:39.646861 8 runners.go:184] Created replication controller with name: proxy-service-qbtm7, namespace: e2e-tests-proxy-q47n9, replica count: 1 I0220 11:43:40.697416 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 11:43:41.697730 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 11:43:42.698137 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 11:43:43.698497 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 11:43:44.698862 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 11:43:45.699151 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 11:43:46.699803 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 11:43:47.700198 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 11:43:48.700550 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0220 11:43:49.701109 8 runners.go:184] 
proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0220 11:43:50.701515 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0220 11:43:51.701852 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0220 11:43:52.702263 8 runners.go:184] proxy-service-qbtm7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 20 11:43:52.718: INFO: setup took 13.238557529s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 20 11:43:52.761: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-q47n9/pods/proxy-service-qbtm7-ntvrk/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4glbz A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4glbz;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4glbz A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4glbz;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4glbz.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4glbz.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4glbz.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4glbz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4glbz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4glbz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4glbz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4glbz.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4glbz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.217.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.217.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.217.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.217.160_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4glbz A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4glbz;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4glbz A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4glbz;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4glbz.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4glbz.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4glbz.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4glbz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4glbz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4glbz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4glbz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4glbz.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4glbz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.217.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.217.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.217.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.217.160_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 20 11:44:25.664: INFO: Unable to read 10.97.217.160_tcp@PTR from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.668: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.672: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.677: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4glbz from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.685: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4glbz from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.692: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4glbz.svc from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.698: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4glbz.svc from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.708: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.716: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.721: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4glbz.svc from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.727: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4glbz.svc from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.731: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.734: INFO: Unable to read jessie_tcp@PodARecord from 
pod e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008) Feb 20 11:44:25.743: INFO: Lookups using e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008 failed for: [10.97.217.160_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4glbz jessie_tcp@dns-test-service.e2e-tests-dns-4glbz jessie_udp@dns-test-service.e2e-tests-dns-4glbz.svc jessie_tcp@dns-test-service.e2e-tests-dns-4glbz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4glbz.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4glbz.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4glbz.svc jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 20 11:44:30.949: INFO: DNS probes using e2e-tests-dns-4glbz/dns-test-4e85fbf0-53d6-11ea-bcb7-0242ac110008 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:44:31.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-4glbz" for this suite. Feb 20 11:44:38.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:44:38.591: INFO: namespace: e2e-tests-dns-4glbz, resource: bindings, ignored listing per whitelist Feb 20 11:44:38.694: INFO: namespace e2e-tests-dns-4glbz deletion completed in 7.095281962s • [SLOW TEST:29.713 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:44:38.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 20 11:44:38.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:41.172: INFO: stderr: "" Feb 20 11:44:41.172: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
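The Update Demo test waits for every name=update-demo container to come up by repeatedly shelling out to `kubectl get pods -o template` (the loop that follows). A stand-alone sketch of the same wait using client-go; it assumes a current client-go (context-taking signatures), and the namespace is taken from this run and would normally be substituted:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "e2e-tests-kubectl-x9qxq" // namespace from the run above; substitute your own
	for {
		pods, err := clientset.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: "name=update-demo"})
		if err != nil {
			panic(err)
		}
		// All pods are considered up once every container reports a Running state.
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if len(p.Status.ContainerStatuses) == 0 {
				allRunning = false
			}
			for _, cs := range p.Status.ContainerStatuses {
				if cs.State.Running == nil {
					allRunning = false
				}
			}
		}
		if allRunning {
			fmt.Println("all name=update-demo containers are running")
			return
		}
		fmt.Println("still waiting for name=update-demo containers...")
		time.Sleep(5 * time.Second) // matches the 5s retry cadence in the log
	}
}
```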
Feb 20 11:44:41.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:41.389: INFO: stderr: "" Feb 20 11:44:41.389: INFO: stdout: "update-demo-nautilus-gd5j8 update-demo-nautilus-lf74c " Feb 20 11:44:41.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gd5j8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:41.584: INFO: stderr: "" Feb 20 11:44:41.585: INFO: stdout: "" Feb 20 11:44:41.585: INFO: update-demo-nautilus-gd5j8 is created but not running Feb 20 11:44:46.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:46.697: INFO: stderr: "" Feb 20 11:44:46.697: INFO: stdout: "update-demo-nautilus-gd5j8 update-demo-nautilus-lf74c " Feb 20 11:44:46.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gd5j8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:46.821: INFO: stderr: "" Feb 20 11:44:46.821: INFO: stdout: "" Feb 20 11:44:46.821: INFO: update-demo-nautilus-gd5j8 is created but not running Feb 20 11:44:51.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:51.942: INFO: stderr: "" Feb 20 11:44:51.942: INFO: stdout: "update-demo-nautilus-gd5j8 update-demo-nautilus-lf74c " Feb 20 11:44:51.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gd5j8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:52.087: INFO: stderr: "" Feb 20 11:44:52.087: INFO: stdout: "" Feb 20 11:44:52.088: INFO: update-demo-nautilus-gd5j8 is created but not running Feb 20 11:44:57.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:57.247: INFO: stderr: "" Feb 20 11:44:57.247: INFO: stdout: "update-demo-nautilus-gd5j8 update-demo-nautilus-lf74c " Feb 20 11:44:57.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gd5j8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:57.356: INFO: stderr: "" Feb 20 11:44:57.356: INFO: stdout: "true" Feb 20 11:44:57.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gd5j8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:57.460: INFO: stderr: "" Feb 20 11:44:57.460: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 11:44:57.460: INFO: validating pod update-demo-nautilus-gd5j8 Feb 20 11:44:57.476: INFO: got data: { "image": "nautilus.jpg" } Feb 20 11:44:57.476: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 11:44:57.476: INFO: update-demo-nautilus-gd5j8 is verified up and running Feb 20 11:44:57.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lf74c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:57.572: INFO: stderr: "" Feb 20 11:44:57.572: INFO: stdout: "true" Feb 20 11:44:57.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lf74c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:57.681: INFO: stderr: "" Feb 20 11:44:57.681: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 11:44:57.681: INFO: validating pod update-demo-nautilus-lf74c Feb 20 11:44:57.693: INFO: got data: { "image": "nautilus.jpg" } Feb 20 11:44:57.693: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 11:44:57.694: INFO: update-demo-nautilus-lf74c is verified up and running STEP: scaling down the replication controller Feb 20 11:44:57.697: INFO: scanned /root for discovery docs: Feb 20 11:44:57.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:59.066: INFO: stderr: "" Feb 20 11:44:59.066: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 20 11:44:59.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:44:59.279: INFO: stderr: "" Feb 20 11:44:59.279: INFO: stdout: "update-demo-nautilus-gd5j8 update-demo-nautilus-lf74c " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 20 11:45:04.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:04.464: INFO: stderr: "" Feb 20 11:45:04.464: INFO: stdout: "update-demo-nautilus-gd5j8 update-demo-nautilus-lf74c " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 20 11:45:09.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:09.626: INFO: stderr: "" Feb 20 11:45:09.626: INFO: stdout: "update-demo-nautilus-gd5j8 update-demo-nautilus-lf74c " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 20 11:45:14.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:15.213: INFO: stderr: "" Feb 20 11:45:15.213: INFO: stdout: "update-demo-nautilus-gd5j8 " Feb 20 11:45:15.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gd5j8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:15.359: INFO: stderr: "" Feb 20 11:45:15.359: INFO: stdout: "true" Feb 20 11:45:15.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gd5j8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:15.459: INFO: stderr: "" Feb 20 11:45:15.459: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 11:45:15.459: INFO: validating pod update-demo-nautilus-gd5j8 Feb 20 11:45:15.479: INFO: got data: { "image": "nautilus.jpg" } Feb 20 11:45:15.480: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 11:45:15.480: INFO: update-demo-nautilus-gd5j8 is verified up and running STEP: scaling up the replication controller Feb 20 11:45:15.483: INFO: scanned /root for discovery docs: Feb 20 11:45:15.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:16.652: INFO: stderr: "" Feb 20 11:45:16.652: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
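The scale-down and scale-up steps above reduce to two kubectl scale calls against the replication controller, with the same flags as in the log:

  kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-x9qxq
  kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-x9qxq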
Feb 20 11:45:16.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:17.216: INFO: stderr: "" Feb 20 11:45:17.216: INFO: stdout: "update-demo-nautilus-7smrq update-demo-nautilus-gd5j8 " Feb 20 11:45:17.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7smrq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:17.367: INFO: stderr: "" Feb 20 11:45:17.367: INFO: stdout: "" Feb 20 11:45:17.367: INFO: update-demo-nautilus-7smrq is created but not running Feb 20 11:45:22.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:22.586: INFO: stderr: "" Feb 20 11:45:22.586: INFO: stdout: "update-demo-nautilus-7smrq update-demo-nautilus-gd5j8 " Feb 20 11:45:22.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7smrq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:22.705: INFO: stderr: "" Feb 20 11:45:22.705: INFO: stdout: "" Feb 20 11:45:22.705: INFO: update-demo-nautilus-7smrq is created but not running Feb 20 11:45:27.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:27.911: INFO: stderr: "" Feb 20 11:45:27.912: INFO: stdout: "update-demo-nautilus-7smrq update-demo-nautilus-gd5j8 " Feb 20 11:45:27.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7smrq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:28.041: INFO: stderr: "" Feb 20 11:45:28.041: INFO: stdout: "true" Feb 20 11:45:28.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7smrq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:28.138: INFO: stderr: "" Feb 20 11:45:28.138: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 11:45:28.138: INFO: validating pod update-demo-nautilus-7smrq Feb 20 11:45:28.153: INFO: got data: { "image": "nautilus.jpg" } Feb 20 11:45:28.153: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 11:45:28.153: INFO: update-demo-nautilus-7smrq is verified up and running Feb 20 11:45:28.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gd5j8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:28.241: INFO: stderr: "" Feb 20 11:45:28.242: INFO: stdout: "true" Feb 20 11:45:28.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gd5j8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:28.373: INFO: stderr: "" Feb 20 11:45:28.373: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 11:45:28.373: INFO: validating pod update-demo-nautilus-gd5j8 Feb 20 11:45:28.386: INFO: got data: { "image": "nautilus.jpg" } Feb 20 11:45:28.386: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 11:45:28.386: INFO: update-demo-nautilus-gd5j8 is verified up and running STEP: using delete to clean up resources Feb 20 11:45:28.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:28.531: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 11:45:28.531: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 20 11:45:28.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-x9qxq' Feb 20 11:45:28.674: INFO: stderr: "No resources found.\n" Feb 20 11:45:28.674: INFO: stdout: "" Feb 20 11:45:28.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-x9qxq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 20 11:45:28.771: INFO: stderr: "" Feb 20 11:45:28.771: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:45:28.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-x9qxq" for this suite. 
Feb 20 11:45:52.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:45:53.047: INFO: namespace: e2e-tests-kubectl-x9qxq, resource: bindings, ignored listing per whitelist Feb 20 11:45:53.150: INFO: namespace e2e-tests-kubectl-x9qxq deletion completed in 24.340649945s • [SLOW TEST:74.455 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:45:53.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Feb 20 11:45:53.384: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-zt7fl" to be "success or failure" Feb 20 11:45:53.491: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 106.568629ms Feb 20 11:45:55.610: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226178546s Feb 20 11:45:57.646: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.262325018s Feb 20 11:46:00.341: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.957038376s Feb 20 11:46:02.389: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.005252287s Feb 20 11:46:04.398: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.014282397s Feb 20 11:46:06.427: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.042420062s Feb 20 11:46:08.444: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.060273988s STEP: Saw pod success Feb 20 11:46:08.444: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Feb 20 11:46:08.465: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: STEP: delete the pod Feb 20 11:46:08.696: INFO: Waiting for pod pod-host-path-test to disappear Feb 20 11:46:08.705: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:46:08.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-zt7fl" for this suite. 
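The pod-host-path-test spec is generated by the framework and never printed above. A rough sketch of a pod exercising the same idea follows; the hostPath path, image and command are assumptions for illustration, and only the pod and container names come from the log:

  cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-hostpath-zt7fl
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-host-path-test
  spec:
    restartPolicy: Never
    volumes:
    - name: test-volume
      hostPath:
        path: /tmp/host-path-test               # assumed path, for illustration only
    containers:
    - name: test-container-1
      image: docker.io/library/busybox:1.29     # assumed image
      command: ["sh", "-c", "stat -c '%a' /test-volume"]   # assumed mode check
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
  EOF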
Feb 20 11:46:14.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:46:14.804: INFO: namespace: e2e-tests-hostpath-zt7fl, resource: bindings, ignored listing per whitelist Feb 20 11:46:14.933: INFO: namespace e2e-tests-hostpath-zt7fl deletion completed in 6.22256343s • [SLOW TEST:21.783 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:46:14.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-99842847-53d6-11ea-bcb7-0242ac110008 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:46:27.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-lhkvv" for this suite. 
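Only the configMap name is visible in the output above. A minimal sketch of a ConfigMap carrying both text and binary data plus a pod that mounts it as a volume; the keys, payloads and reader pod are assumptions for illustration:

  cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-configmap-lhkvv
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-upd-99842847-53d6-11ea-bcb7-0242ac110008
  data:
    data-1: value-1                         # assumed text key
  binaryData:
    dump.bin: aGVsbG8gd29ybGQ=              # assumed base64-encoded binary payload
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-binary-volume-example   # assumed name, not the test's pod
  spec:
    restartPolicy: Never
    volumes:
    - name: cm
      configMap:
        name: configmap-test-upd-99842847-53d6-11ea-bcb7-0242ac110008
    containers:
    - name: reader
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/cm/data-1 && cat /etc/cm/dump.bin"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
  EOF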
Feb 20 11:46:51.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:46:51.455: INFO: namespace: e2e-tests-configmap-lhkvv, resource: bindings, ignored listing per whitelist Feb 20 11:46:51.577: INFO: namespace e2e-tests-configmap-lhkvv deletion completed in 24.220901301s • [SLOW TEST:36.643 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:46:51.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 20 11:46:51.904: INFO: PodSpec: initContainers in spec.initContainers Feb 20 11:48:00.008: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-af663cab-53d6-11ea-bcb7-0242ac110008", GenerateName:"", Namespace:"e2e-tests-init-container-phkbr", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-phkbr/pods/pod-init-af663cab-53d6-11ea-bcb7-0242ac110008", UID:"af6c3aeb-53d6-11ea-a994-fa163e34d433", ResourceVersion:"22306269", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717796011, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"904143495"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2c8gd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00202f5c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2c8gd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2c8gd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2c8gd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b66ac8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00184df80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b66b80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b66ba0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b66ba8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b66bac)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717796012, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717796012, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717796012, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717796011, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000d8dec0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ae6380)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ae63f0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c9ab8a87a3434519b0fd98cc95b19bf69bb09ae00190122a5117f89f1984dfa1"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d8df00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d8dee0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:48:00.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-phkbr" for this suite. Feb 20 11:48:22.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:48:22.339: INFO: namespace: e2e-tests-init-container-phkbr, resource: bindings, ignored listing per whitelist Feb 20 11:48:22.415: INFO: namespace e2e-tests-init-container-phkbr deletion completed in 22.384512728s • [SLOW TEST:90.838 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:48:22.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 20 11:48:23.059: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-7wtcj,SelfLink:/api/v1/namespaces/e2e-tests-watch-7wtcj/configmaps/e2e-watch-test-resource-version,UID:e5a88c12-53d6-11ea-a994-fa163e34d433,ResourceVersion:22306322,Generation:0,CreationTimestamp:2020-02-20 11:48:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 20 11:48:23.059: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-7wtcj,SelfLink:/api/v1/namespaces/e2e-tests-watch-7wtcj/configmaps/e2e-watch-test-resource-version,UID:e5a88c12-53d6-11ea-a994-fa163e34d433,ResourceVersion:22306323,Generation:0,CreationTimestamp:2020-02-20 11:48:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:48:23.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-7wtcj" for this suite. Feb 20 11:48:29.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:48:29.117: INFO: namespace: e2e-tests-watch-7wtcj, resource: bindings, ignored listing per whitelist Feb 20 11:48:29.328: INFO: namespace e2e-tests-watch-7wtcj deletion completed in 6.257095225s • [SLOW TEST:6.913 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:48:29.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-e9952cf6-53d6-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 11:48:29.533: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-58pbd" to be "success or failure" Feb 20 11:48:29.644: INFO: Pod "pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 109.981253ms Feb 20 11:48:31.657: INFO: Pod "pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123486413s Feb 20 11:48:33.673: INFO: Pod "pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.13919396s Feb 20 11:48:36.724: INFO: Pod "pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.190598168s Feb 20 11:48:38.762: INFO: Pod "pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.22880689s Feb 20 11:48:40.812: INFO: Pod "pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.278375577s STEP: Saw pod success Feb 20 11:48:40.812: INFO: Pod "pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:48:40.842: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 20 11:48:41.462: INFO: Waiting for pod pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008 to disappear Feb 20 11:48:41.468: INFO: Pod pod-projected-secrets-e995ee8c-53d6-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:48:41.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-58pbd" for this suite. Feb 20 11:48:47.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:48:47.812: INFO: namespace: e2e-tests-projected-58pbd, resource: bindings, ignored listing per whitelist Feb 20 11:48:47.966: INFO: namespace e2e-tests-projected-58pbd deletion completed in 6.493774978s • [SLOW TEST:18.637 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:48:47.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 20 11:48:48.112: INFO: Waiting up to 5m0s for pod "pod-f4a48e4f-53d6-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-qhbx5" to be "success or failure" Feb 20 11:48:48.203: INFO: Pod "pod-f4a48e4f-53d6-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 90.559192ms Feb 20 11:48:50.221: INFO: Pod "pod-f4a48e4f-53d6-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108752741s Feb 20 11:48:53.077: INFO: Pod "pod-f4a48e4f-53d6-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.96460919s Feb 20 11:48:55.100: INFO: Pod "pod-f4a48e4f-53d6-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.98773638s Feb 20 11:48:57.113: INFO: Pod "pod-f4a48e4f-53d6-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.000528193s STEP: Saw pod success Feb 20 11:48:57.113: INFO: Pod "pod-f4a48e4f-53d6-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:48:57.115: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f4a48e4f-53d6-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 11:48:57.248: INFO: Waiting for pod pod-f4a48e4f-53d6-11ea-bcb7-0242ac110008 to disappear Feb 20 11:48:57.321: INFO: Pod pod-f4a48e4f-53d6-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:48:57.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qhbx5" for this suite. Feb 20 11:49:03.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:49:03.497: INFO: namespace: e2e-tests-emptydir-qhbx5, resource: bindings, ignored listing per whitelist Feb 20 11:49:03.623: INFO: namespace e2e-tests-emptydir-qhbx5 deletion completed in 6.285412361s • [SLOW TEST:15.657 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:49:03.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:49:03.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-6pcm7" for this suite. 
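The QOS-class pod spec is not printed above. A pod is classed Guaranteed when every container sets resource requests equal to limits, which matches the 100m CPU / 52428800-byte memory requests and limits on the init-container pod dumped earlier in this log. A minimal sketch, with pod and container names assumed:

  cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-pods-6pcm7
  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-class-example                 # assumed name, not the pod used by the test
  spec:
    containers:
    - name: main
      image: k8s.gcr.io/pause:3.1
      resources:
        requests:
          cpu: 100m
          memory: "52428800"
        limits:
          cpu: 100m
          memory: "52428800"
  EOF

  # The API server records the resulting class in status.qosClass; expect "Guaranteed"
  kubectl --kubeconfig=/root/.kube/config get pod qos-class-example --namespace=e2e-tests-pods-6pcm7 -o jsonpath='{.status.qosClass}'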
Feb 20 11:49:28.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:49:28.401: INFO: namespace: e2e-tests-pods-6pcm7, resource: bindings, ignored listing per whitelist Feb 20 11:49:28.401: INFO: namespace e2e-tests-pods-6pcm7 deletion completed in 24.364965227s • [SLOW TEST:24.778 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:49:28.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-q7pkz [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-q7pkz STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-q7pkz Feb 20 11:49:28.736: INFO: Found 0 stateful pods, waiting for 1 Feb 20 11:49:38.755: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Feb 20 11:49:48.793: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 20 11:49:48.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 20 11:49:49.582: INFO: stderr: "I0220 11:49:49.021363 1681 log.go:172] (0xc00015c790) (0xc00062f540) Create stream\nI0220 11:49:49.021603 1681 log.go:172] (0xc00015c790) (0xc00062f540) Stream added, broadcasting: 1\nI0220 11:49:49.027963 1681 log.go:172] (0xc00015c790) Reply frame received for 1\nI0220 11:49:49.027993 1681 log.go:172] (0xc00015c790) (0xc00062f5e0) Create stream\nI0220 11:49:49.028001 1681 log.go:172] (0xc00015c790) (0xc00062f5e0) Stream added, broadcasting: 3\nI0220 11:49:49.029261 1681 log.go:172] (0xc00015c790) Reply frame received for 3\nI0220 11:49:49.029292 1681 log.go:172] (0xc00015c790) (0xc00073a000) Create stream\nI0220 11:49:49.029304 1681 log.go:172] (0xc00015c790) 
(0xc00073a000) Stream added, broadcasting: 5\nI0220 11:49:49.030686 1681 log.go:172] (0xc00015c790) Reply frame received for 5\nI0220 11:49:49.367302 1681 log.go:172] (0xc00015c790) Data frame received for 3\nI0220 11:49:49.367372 1681 log.go:172] (0xc00062f5e0) (3) Data frame handling\nI0220 11:49:49.367399 1681 log.go:172] (0xc00062f5e0) (3) Data frame sent\nI0220 11:49:49.573967 1681 log.go:172] (0xc00015c790) Data frame received for 1\nI0220 11:49:49.574073 1681 log.go:172] (0xc00062f540) (1) Data frame handling\nI0220 11:49:49.574102 1681 log.go:172] (0xc00062f540) (1) Data frame sent\nI0220 11:49:49.574121 1681 log.go:172] (0xc00015c790) (0xc00062f540) Stream removed, broadcasting: 1\nI0220 11:49:49.574720 1681 log.go:172] (0xc00015c790) (0xc00062f5e0) Stream removed, broadcasting: 3\nI0220 11:49:49.574758 1681 log.go:172] (0xc00015c790) (0xc00073a000) Stream removed, broadcasting: 5\nI0220 11:49:49.574812 1681 log.go:172] (0xc00015c790) (0xc00062f540) Stream removed, broadcasting: 1\nI0220 11:49:49.574826 1681 log.go:172] (0xc00015c790) (0xc00062f5e0) Stream removed, broadcasting: 3\nI0220 11:49:49.574842 1681 log.go:172] (0xc00015c790) (0xc00073a000) Stream removed, broadcasting: 5\n" Feb 20 11:49:49.583: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 20 11:49:49.583: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 20 11:49:49.605: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 20 11:49:59.625: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 20 11:49:59.625: INFO: Waiting for statefulset status.replicas updated to 0 Feb 20 11:49:59.675: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999569s Feb 20 11:50:00.708: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.976283114s Feb 20 11:50:01.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.943302811s Feb 20 11:50:02.756: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.910655401s Feb 20 11:50:03.804: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.895489132s Feb 20 11:50:04.828: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.847808076s Feb 20 11:50:05.887: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.823060305s Feb 20 11:50:07.498: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.764399271s Feb 20 11:50:08.575: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.152931158s Feb 20 11:50:09.602: INFO: Verifying statefulset ss doesn't scale past 1 for another 76.069824ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-q7pkz Feb 20 11:50:10.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:50:11.207: INFO: stderr: "I0220 11:50:10.880575 1703 log.go:172] (0xc000138630) (0xc00059d4a0) Create stream\nI0220 11:50:10.880692 1703 log.go:172] (0xc000138630) (0xc00059d4a0) Stream added, broadcasting: 1\nI0220 11:50:10.891029 1703 log.go:172] (0xc000138630) Reply frame received for 1\nI0220 11:50:10.891065 1703 log.go:172] (0xc000138630) (0xc0007a2000) Create stream\nI0220 11:50:10.891076 1703 log.go:172] (0xc000138630) (0xc0007a2000) Stream 
added, broadcasting: 3\nI0220 11:50:10.892330 1703 log.go:172] (0xc000138630) Reply frame received for 3\nI0220 11:50:10.892353 1703 log.go:172] (0xc000138630) (0xc00067a000) Create stream\nI0220 11:50:10.892359 1703 log.go:172] (0xc000138630) (0xc00067a000) Stream added, broadcasting: 5\nI0220 11:50:10.895156 1703 log.go:172] (0xc000138630) Reply frame received for 5\nI0220 11:50:11.016533 1703 log.go:172] (0xc000138630) Data frame received for 3\nI0220 11:50:11.016729 1703 log.go:172] (0xc0007a2000) (3) Data frame handling\nI0220 11:50:11.016767 1703 log.go:172] (0xc0007a2000) (3) Data frame sent\nI0220 11:50:11.200921 1703 log.go:172] (0xc000138630) (0xc0007a2000) Stream removed, broadcasting: 3\nI0220 11:50:11.201055 1703 log.go:172] (0xc000138630) Data frame received for 1\nI0220 11:50:11.201065 1703 log.go:172] (0xc00059d4a0) (1) Data frame handling\nI0220 11:50:11.201074 1703 log.go:172] (0xc00059d4a0) (1) Data frame sent\nI0220 11:50:11.201083 1703 log.go:172] (0xc000138630) (0xc00059d4a0) Stream removed, broadcasting: 1\nI0220 11:50:11.201328 1703 log.go:172] (0xc000138630) (0xc00067a000) Stream removed, broadcasting: 5\nI0220 11:50:11.201353 1703 log.go:172] (0xc000138630) (0xc00059d4a0) Stream removed, broadcasting: 1\nI0220 11:50:11.201359 1703 log.go:172] (0xc000138630) (0xc0007a2000) Stream removed, broadcasting: 3\nI0220 11:50:11.201367 1703 log.go:172] (0xc000138630) (0xc00067a000) Stream removed, broadcasting: 5\n" Feb 20 11:50:11.207: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 20 11:50:11.207: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 20 11:50:11.239: INFO: Found 1 stateful pods, waiting for 3 Feb 20 11:50:21.255: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 11:50:21.255: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 11:50:21.255: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 20 11:50:31.262: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 11:50:31.262: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 11:50:31.262: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 20 11:50:31.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 20 11:50:31.826: INFO: stderr: "I0220 11:50:31.490360 1725 log.go:172] (0xc0007142c0) (0xc000659400) Create stream\nI0220 11:50:31.490520 1725 log.go:172] (0xc0007142c0) (0xc000659400) Stream added, broadcasting: 1\nI0220 11:50:31.496878 1725 log.go:172] (0xc0007142c0) Reply frame received for 1\nI0220 11:50:31.496919 1725 log.go:172] (0xc0007142c0) (0xc0003b2000) Create stream\nI0220 11:50:31.496938 1725 log.go:172] (0xc0007142c0) (0xc0003b2000) Stream added, broadcasting: 3\nI0220 11:50:31.498782 1725 log.go:172] (0xc0007142c0) Reply frame received for 3\nI0220 11:50:31.498883 1725 log.go:172] (0xc0007142c0) (0xc00023a000) Create stream\nI0220 11:50:31.498900 1725 log.go:172] (0xc0007142c0) (0xc00023a000) Stream added, broadcasting: 5\nI0220 11:50:31.502209 1725 log.go:172] 
(0xc0007142c0) Reply frame received for 5\nI0220 11:50:31.651049 1725 log.go:172] (0xc0007142c0) Data frame received for 3\nI0220 11:50:31.651105 1725 log.go:172] (0xc0003b2000) (3) Data frame handling\nI0220 11:50:31.651121 1725 log.go:172] (0xc0003b2000) (3) Data frame sent\nI0220 11:50:31.818730 1725 log.go:172] (0xc0007142c0) (0xc0003b2000) Stream removed, broadcasting: 3\nI0220 11:50:31.818897 1725 log.go:172] (0xc0007142c0) (0xc00023a000) Stream removed, broadcasting: 5\nI0220 11:50:31.819057 1725 log.go:172] (0xc0007142c0) Data frame received for 1\nI0220 11:50:31.819139 1725 log.go:172] (0xc000659400) (1) Data frame handling\nI0220 11:50:31.819165 1725 log.go:172] (0xc000659400) (1) Data frame sent\nI0220 11:50:31.819183 1725 log.go:172] (0xc0007142c0) (0xc000659400) Stream removed, broadcasting: 1\nI0220 11:50:31.819206 1725 log.go:172] (0xc0007142c0) Go away received\nI0220 11:50:31.819711 1725 log.go:172] (0xc0007142c0) (0xc000659400) Stream removed, broadcasting: 1\nI0220 11:50:31.819764 1725 log.go:172] (0xc0007142c0) (0xc0003b2000) Stream removed, broadcasting: 3\nI0220 11:50:31.819776 1725 log.go:172] (0xc0007142c0) (0xc00023a000) Stream removed, broadcasting: 5\n" Feb 20 11:50:31.826: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 20 11:50:31.826: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 20 11:50:31.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 20 11:50:32.492: INFO: stderr: "I0220 11:50:32.108897 1747 log.go:172] (0xc000138790) (0xc0005c1360) Create stream\nI0220 11:50:32.109086 1747 log.go:172] (0xc000138790) (0xc0005c1360) Stream added, broadcasting: 1\nI0220 11:50:32.115723 1747 log.go:172] (0xc000138790) Reply frame received for 1\nI0220 11:50:32.115747 1747 log.go:172] (0xc000138790) (0xc000352000) Create stream\nI0220 11:50:32.115755 1747 log.go:172] (0xc000138790) (0xc000352000) Stream added, broadcasting: 3\nI0220 11:50:32.116658 1747 log.go:172] (0xc000138790) Reply frame received for 3\nI0220 11:50:32.116678 1747 log.go:172] (0xc000138790) (0xc0005c1400) Create stream\nI0220 11:50:32.116688 1747 log.go:172] (0xc000138790) (0xc0005c1400) Stream added, broadcasting: 5\nI0220 11:50:32.117894 1747 log.go:172] (0xc000138790) Reply frame received for 5\nI0220 11:50:32.321279 1747 log.go:172] (0xc000138790) Data frame received for 3\nI0220 11:50:32.321338 1747 log.go:172] (0xc000352000) (3) Data frame handling\nI0220 11:50:32.321351 1747 log.go:172] (0xc000352000) (3) Data frame sent\nI0220 11:50:32.484637 1747 log.go:172] (0xc000138790) Data frame received for 1\nI0220 11:50:32.485016 1747 log.go:172] (0xc000138790) (0xc000352000) Stream removed, broadcasting: 3\nI0220 11:50:32.485200 1747 log.go:172] (0xc0005c1360) (1) Data frame handling\nI0220 11:50:32.485358 1747 log.go:172] (0xc0005c1360) (1) Data frame sent\nI0220 11:50:32.485463 1747 log.go:172] (0xc000138790) (0xc0005c1400) Stream removed, broadcasting: 5\nI0220 11:50:32.485548 1747 log.go:172] (0xc000138790) (0xc0005c1360) Stream removed, broadcasting: 1\nI0220 11:50:32.485567 1747 log.go:172] (0xc000138790) Go away received\nI0220 11:50:32.485712 1747 log.go:172] (0xc000138790) (0xc0005c1360) Stream removed, broadcasting: 1\nI0220 11:50:32.485730 1747 log.go:172] (0xc000138790) (0xc000352000) Stream removed, broadcasting: 3\nI0220 
11:50:32.485738 1747 log.go:172] (0xc000138790) (0xc0005c1400) Stream removed, broadcasting: 5\n" Feb 20 11:50:32.492: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 20 11:50:32.492: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 20 11:50:32.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 20 11:50:33.299: INFO: stderr: "I0220 11:50:32.706442 1769 log.go:172] (0xc0007f6160) (0xc0006f26e0) Create stream\nI0220 11:50:32.706719 1769 log.go:172] (0xc0007f6160) (0xc0006f26e0) Stream added, broadcasting: 1\nI0220 11:50:32.713557 1769 log.go:172] (0xc0007f6160) Reply frame received for 1\nI0220 11:50:32.713631 1769 log.go:172] (0xc0007f6160) (0xc0007a88c0) Create stream\nI0220 11:50:32.713641 1769 log.go:172] (0xc0007f6160) (0xc0007a88c0) Stream added, broadcasting: 3\nI0220 11:50:32.718304 1769 log.go:172] (0xc0007f6160) Reply frame received for 3\nI0220 11:50:32.718358 1769 log.go:172] (0xc0007f6160) (0xc000400d20) Create stream\nI0220 11:50:32.718377 1769 log.go:172] (0xc0007f6160) (0xc000400d20) Stream added, broadcasting: 5\nI0220 11:50:32.719494 1769 log.go:172] (0xc0007f6160) Reply frame received for 5\nI0220 11:50:33.164164 1769 log.go:172] (0xc0007f6160) Data frame received for 3\nI0220 11:50:33.164194 1769 log.go:172] (0xc0007a88c0) (3) Data frame handling\nI0220 11:50:33.164204 1769 log.go:172] (0xc0007a88c0) (3) Data frame sent\nI0220 11:50:33.294278 1769 log.go:172] (0xc0007f6160) Data frame received for 1\nI0220 11:50:33.294341 1769 log.go:172] (0xc0007f6160) (0xc0007a88c0) Stream removed, broadcasting: 3\nI0220 11:50:33.294358 1769 log.go:172] (0xc0006f26e0) (1) Data frame handling\nI0220 11:50:33.294365 1769 log.go:172] (0xc0006f26e0) (1) Data frame sent\nI0220 11:50:33.294378 1769 log.go:172] (0xc0007f6160) (0xc0006f26e0) Stream removed, broadcasting: 1\nI0220 11:50:33.294425 1769 log.go:172] (0xc0007f6160) (0xc000400d20) Stream removed, broadcasting: 5\nI0220 11:50:33.294467 1769 log.go:172] (0xc0007f6160) Go away received\nI0220 11:50:33.294493 1769 log.go:172] (0xc0007f6160) (0xc0006f26e0) Stream removed, broadcasting: 1\nI0220 11:50:33.294519 1769 log.go:172] (0xc0007f6160) (0xc0007a88c0) Stream removed, broadcasting: 3\nI0220 11:50:33.294537 1769 log.go:172] (0xc0007f6160) (0xc000400d20) Stream removed, broadcasting: 5\n" Feb 20 11:50:33.299: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 20 11:50:33.299: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 20 11:50:33.299: INFO: Waiting for statefulset status.replicas updated to 0 Feb 20 11:50:33.335: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 20 11:50:43.364: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 20 11:50:43.364: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 20 11:50:43.364: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 20 11:50:43.404: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999571s Feb 20 11:50:44.414: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985577905s Feb 20 11:50:45.461: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 7.975318933s Feb 20 11:50:46.488: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.928881846s Feb 20 11:50:47.503: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.901738961s Feb 20 11:50:48.534: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.886521751s Feb 20 11:50:49.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.855066046s Feb 20 11:50:50.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.841512394s Feb 20 11:50:51.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.804721751s Feb 20 11:50:52.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 792.588821ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-q7pkz Feb 20 11:50:53.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:50:54.415: INFO: stderr: "I0220 11:50:53.994541 1791 log.go:172] (0xc000708370) (0xc0007b2640) Create stream\nI0220 11:50:53.994846 1791 log.go:172] (0xc000708370) (0xc0007b2640) Stream added, broadcasting: 1\nI0220 11:50:54.004317 1791 log.go:172] (0xc000708370) Reply frame received for 1\nI0220 11:50:54.004358 1791 log.go:172] (0xc000708370) (0xc000598c80) Create stream\nI0220 11:50:54.004366 1791 log.go:172] (0xc000708370) (0xc000598c80) Stream added, broadcasting: 3\nI0220 11:50:54.005862 1791 log.go:172] (0xc000708370) Reply frame received for 3\nI0220 11:50:54.005892 1791 log.go:172] (0xc000708370) (0xc0007b26e0) Create stream\nI0220 11:50:54.005906 1791 log.go:172] (0xc000708370) (0xc0007b26e0) Stream added, broadcasting: 5\nI0220 11:50:54.007154 1791 log.go:172] (0xc000708370) Reply frame received for 5\nI0220 11:50:54.231970 1791 log.go:172] (0xc000708370) Data frame received for 3\nI0220 11:50:54.232055 1791 log.go:172] (0xc000598c80) (3) Data frame handling\nI0220 11:50:54.232077 1791 log.go:172] (0xc000598c80) (3) Data frame sent\nI0220 11:50:54.406992 1791 log.go:172] (0xc000708370) (0xc000598c80) Stream removed, broadcasting: 3\nI0220 11:50:54.407200 1791 log.go:172] (0xc000708370) Data frame received for 1\nI0220 11:50:54.407223 1791 log.go:172] (0xc0007b2640) (1) Data frame handling\nI0220 11:50:54.407235 1791 log.go:172] (0xc0007b2640) (1) Data frame sent\nI0220 11:50:54.407247 1791 log.go:172] (0xc000708370) (0xc0007b2640) Stream removed, broadcasting: 1\nI0220 11:50:54.407541 1791 log.go:172] (0xc000708370) (0xc0007b26e0) Stream removed, broadcasting: 5\nI0220 11:50:54.407580 1791 log.go:172] (0xc000708370) (0xc0007b2640) Stream removed, broadcasting: 1\nI0220 11:50:54.407598 1791 log.go:172] (0xc000708370) (0xc000598c80) Stream removed, broadcasting: 3\nI0220 11:50:54.407616 1791 log.go:172] (0xc000708370) (0xc0007b26e0) Stream removed, broadcasting: 5\nI0220 11:50:54.407828 1791 log.go:172] (0xc000708370) Go away received\n" Feb 20 11:50:54.415: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 20 11:50:54.415: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 20 11:50:54.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:50:55.256: 
INFO: stderr: "I0220 11:50:54.701782 1812 log.go:172] (0xc00070a370) (0xc0006432c0) Create stream\nI0220 11:50:54.702129 1812 log.go:172] (0xc00070a370) (0xc0006432c0) Stream added, broadcasting: 1\nI0220 11:50:54.782390 1812 log.go:172] (0xc00070a370) Reply frame received for 1\nI0220 11:50:54.782467 1812 log.go:172] (0xc00070a370) (0xc000643360) Create stream\nI0220 11:50:54.782475 1812 log.go:172] (0xc00070a370) (0xc000643360) Stream added, broadcasting: 3\nI0220 11:50:54.785815 1812 log.go:172] (0xc00070a370) Reply frame received for 3\nI0220 11:50:54.785891 1812 log.go:172] (0xc00070a370) (0xc0005ea000) Create stream\nI0220 11:50:54.785953 1812 log.go:172] (0xc00070a370) (0xc0005ea000) Stream added, broadcasting: 5\nI0220 11:50:54.790360 1812 log.go:172] (0xc00070a370) Reply frame received for 5\nI0220 11:50:55.034783 1812 log.go:172] (0xc00070a370) Data frame received for 3\nI0220 11:50:55.034848 1812 log.go:172] (0xc000643360) (3) Data frame handling\nI0220 11:50:55.034863 1812 log.go:172] (0xc000643360) (3) Data frame sent\nI0220 11:50:55.249896 1812 log.go:172] (0xc00070a370) (0xc000643360) Stream removed, broadcasting: 3\nI0220 11:50:55.250020 1812 log.go:172] (0xc00070a370) Data frame received for 1\nI0220 11:50:55.250031 1812 log.go:172] (0xc0006432c0) (1) Data frame handling\nI0220 11:50:55.250075 1812 log.go:172] (0xc0006432c0) (1) Data frame sent\nI0220 11:50:55.250156 1812 log.go:172] (0xc00070a370) (0xc0006432c0) Stream removed, broadcasting: 1\nI0220 11:50:55.250205 1812 log.go:172] (0xc00070a370) (0xc0005ea000) Stream removed, broadcasting: 5\nI0220 11:50:55.250270 1812 log.go:172] (0xc00070a370) Go away received\nI0220 11:50:55.250305 1812 log.go:172] (0xc00070a370) (0xc0006432c0) Stream removed, broadcasting: 1\nI0220 11:50:55.250315 1812 log.go:172] (0xc00070a370) (0xc000643360) Stream removed, broadcasting: 3\nI0220 11:50:55.250324 1812 log.go:172] (0xc00070a370) (0xc0005ea000) Stream removed, broadcasting: 5\n" Feb 20 11:50:55.256: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 20 11:50:55.256: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 20 11:50:55.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:50:55.761: INFO: rc: 126 Feb 20 11:50:55.762: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] cannot exec in a stopped state: unknown I0220 11:50:55.641105 1833 log.go:172] (0xc0006da0b0) (0xc0006fa6e0) Create stream I0220 11:50:55.641224 1833 log.go:172] (0xc0006da0b0) (0xc0006fa6e0) Stream added, broadcasting: 1 I0220 11:50:55.654011 1833 log.go:172] (0xc0006da0b0) Reply frame received for 1 I0220 11:50:55.654075 1833 log.go:172] (0xc0006da0b0) (0xc0006fa780) Create stream I0220 11:50:55.654195 1833 log.go:172] (0xc0006da0b0) (0xc0006fa780) Stream added, broadcasting: 3 I0220 11:50:55.656859 1833 log.go:172] (0xc0006da0b0) Reply frame received for 3 I0220 11:50:55.656896 1833 log.go:172] (0xc0006da0b0) (0xc00062cdc0) Create stream I0220 11:50:55.656913 1833 log.go:172] (0xc0006da0b0) (0xc00062cdc0) Stream added, broadcasting: 5 I0220 11:50:55.659663 1833 log.go:172] (0xc0006da0b0) Reply frame received 
for 5 I0220 11:50:55.746892 1833 log.go:172] (0xc0006da0b0) Data frame received for 3 I0220 11:50:55.746959 1833 log.go:172] (0xc0006fa780) (3) Data frame handling I0220 11:50:55.746989 1833 log.go:172] (0xc0006fa780) (3) Data frame sent I0220 11:50:55.751509 1833 log.go:172] (0xc0006da0b0) (0xc0006fa780) Stream removed, broadcasting: 3 I0220 11:50:55.751560 1833 log.go:172] (0xc0006da0b0) Data frame received for 1 I0220 11:50:55.751593 1833 log.go:172] (0xc0006fa6e0) (1) Data frame handling I0220 11:50:55.751606 1833 log.go:172] (0xc0006fa6e0) (1) Data frame sent I0220 11:50:55.751626 1833 log.go:172] (0xc0006da0b0) (0xc00062cdc0) Stream removed, broadcasting: 5 I0220 11:50:55.751665 1833 log.go:172] (0xc0006da0b0) (0xc0006fa6e0) Stream removed, broadcasting: 1 I0220 11:50:55.751694 1833 log.go:172] (0xc0006da0b0) Go away received I0220 11:50:55.752033 1833 log.go:172] (0xc0006da0b0) (0xc0006fa6e0) Stream removed, broadcasting: 1 I0220 11:50:55.752068 1833 log.go:172] (0xc0006da0b0) (0xc0006fa780) Stream removed, broadcasting: 3 I0220 11:50:55.752081 1833 log.go:172] (0xc0006da0b0) (0xc00062cdc0) Stream removed, broadcasting: 5 command terminated with exit code 126 [] 0xc002122b10 exit status 126 true [0xc000fea088 0xc000fea0a0 0xc000fea0b8] [0xc000fea088 0xc000fea0a0 0xc000fea0b8] [0xc000fea098 0xc000fea0b0] [0x935700 0x935700] 0xc0023a11a0 }: Command stdout: cannot exec in a stopped state: unknown stderr: I0220 11:50:55.641105 1833 log.go:172] (0xc0006da0b0) (0xc0006fa6e0) Create stream I0220 11:50:55.641224 1833 log.go:172] (0xc0006da0b0) (0xc0006fa6e0) Stream added, broadcasting: 1 I0220 11:50:55.654011 1833 log.go:172] (0xc0006da0b0) Reply frame received for 1 I0220 11:50:55.654075 1833 log.go:172] (0xc0006da0b0) (0xc0006fa780) Create stream I0220 11:50:55.654195 1833 log.go:172] (0xc0006da0b0) (0xc0006fa780) Stream added, broadcasting: 3 I0220 11:50:55.656859 1833 log.go:172] (0xc0006da0b0) Reply frame received for 3 I0220 11:50:55.656896 1833 log.go:172] (0xc0006da0b0) (0xc00062cdc0) Create stream I0220 11:50:55.656913 1833 log.go:172] (0xc0006da0b0) (0xc00062cdc0) Stream added, broadcasting: 5 I0220 11:50:55.659663 1833 log.go:172] (0xc0006da0b0) Reply frame received for 5 I0220 11:50:55.746892 1833 log.go:172] (0xc0006da0b0) Data frame received for 3 I0220 11:50:55.746959 1833 log.go:172] (0xc0006fa780) (3) Data frame handling I0220 11:50:55.746989 1833 log.go:172] (0xc0006fa780) (3) Data frame sent I0220 11:50:55.751509 1833 log.go:172] (0xc0006da0b0) (0xc0006fa780) Stream removed, broadcasting: 3 I0220 11:50:55.751560 1833 log.go:172] (0xc0006da0b0) Data frame received for 1 I0220 11:50:55.751593 1833 log.go:172] (0xc0006fa6e0) (1) Data frame handling I0220 11:50:55.751606 1833 log.go:172] (0xc0006fa6e0) (1) Data frame sent I0220 11:50:55.751626 1833 log.go:172] (0xc0006da0b0) (0xc00062cdc0) Stream removed, broadcasting: 5 I0220 11:50:55.751665 1833 log.go:172] (0xc0006da0b0) (0xc0006fa6e0) Stream removed, broadcasting: 1 I0220 11:50:55.751694 1833 log.go:172] (0xc0006da0b0) Go away received I0220 11:50:55.752033 1833 log.go:172] (0xc0006da0b0) (0xc0006fa6e0) Stream removed, broadcasting: 1 I0220 11:50:55.752068 1833 log.go:172] (0xc0006da0b0) (0xc0006fa780) Stream removed, broadcasting: 3 I0220 11:50:55.752081 1833 log.go:172] (0xc0006da0b0) (0xc00062cdc0) Stream removed, broadcasting: 5 command terminated with exit code 126 error: exit status 126 Feb 20 11:51:05.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:51:06.058: INFO: rc: 1 Feb 20 11:51:06.058: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001cb1b30 exit status 1 true [0xc0000de200 0xc0000de270 0xc0000de2c0] [0xc0000de200 0xc0000de270 0xc0000de2c0] [0xc0000de218 0xc0000de2b0] [0x935700 0x935700] 0xc00240b020 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 20 11:51:16.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:51:16.171: INFO: rc: 1 Feb 20 11:51:16.171: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002122c60 exit status 1 true [0xc000fea0c0 0xc000fea0d8 0xc000fea0f0] [0xc000fea0c0 0xc000fea0d8 0xc000fea0f0] [0xc000fea0d0 0xc000fea0e8] [0x935700 0x935700] 0xc0023a14a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:51:26.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:51:26.276: INFO: rc: 1 Feb 20 11:51:26.277: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cb1c80 exit status 1 true [0xc0000de2c8 0xc0000de380 0xc0000de408] [0xc0000de2c8 0xc0000de380 0xc0000de408] [0xc0000de308 0xc0000de3d0] [0x935700 0x935700] 0xc00240b2c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:51:36.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:51:36.418: INFO: rc: 1 Feb 20 11:51:36.418: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cb1e00 exit status 1 true [0xc0000de410 0xc0000de480 0xc0000de560] [0xc0000de410 0xc0000de480 0xc0000de560] [0xc0000de430 0xc0000de548] [0x935700 0x935700] 0xc00240ba40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:51:46.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:51:46.557: INFO: rc: 1 Feb 20 
11:51:46.558: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cb1f50 exit status 1 true [0xc0000de598 0xc0000de608 0xc0000de6c8] [0xc0000de598 0xc0000de608 0xc0000de6c8] [0xc0000de5d0 0xc0000de6b0] [0x935700 0x935700] 0xc00240bce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:51:56.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:51:56.666: INFO: rc: 1 Feb 20 11:51:56.666: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025737a0 exit status 1 true [0xc0019a80c8 0xc0019a80e0 0xc0019a80f8] [0xc0019a80c8 0xc0019a80e0 0xc0019a80f8] [0xc0019a80d8 0xc0019a80f0] [0x935700 0x935700] 0xc00218c360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:52:06.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:52:06.815: INFO: rc: 1 Feb 20 11:52:06.815: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e4c120 exit status 1 true [0xc0000de718 0xc0000de898 0xc0000dece8] [0xc0000de718 0xc0000de898 0xc0000dece8] [0xc0000de798 0xc0000de8c0] [0x935700 0x935700] 0xc00240bf80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:52:16.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:52:16.956: INFO: rc: 1 Feb 20 11:52:16.956: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002122ea0 exit status 1 true [0xc000fea0f8 0xc000fea110 0xc000fea128] [0xc000fea0f8 0xc000fea110 0xc000fea128] [0xc000fea108 0xc000fea120] [0x935700 0x935700] 0xc0023a1740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:52:26.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:52:27.107: INFO: rc: 1 Feb 20 11:52:27.107: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e4c2a0 exit status 1 true [0xc0000ded30 0xc0000dedc8 0xc0000dee18] [0xc0000ded30 0xc0000dedc8 0xc0000dee18] [0xc0000ded60 0xc0000dede8] [0x935700 0x935700] 0xc00210a300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:52:37.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:52:37.250: INFO: rc: 1 Feb 20 11:52:37.251: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002123020 exit status 1 true [0xc000fea130 0xc000fea148 0xc000fea160] [0xc000fea130 0xc000fea148 0xc000fea160] [0xc000fea140 0xc000fea158] [0x935700 0x935700] 0xc0023a1c20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:52:47.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:52:47.367: INFO: rc: 1 Feb 20 11:52:47.367: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cb0540 exit status 1 true [0xc0019a8000 0xc0019a8018 0xc0019a8030] [0xc0019a8000 0xc0019a8018 0xc0019a8030] [0xc0019a8010 0xc0019a8028] [0x935700 0x935700] 0xc00240a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:52:57.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:52:57.543: INFO: rc: 1 Feb 20 11:52:57.543: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001516120 exit status 1 true [0xc000fea000 0xc000fea018 0xc000fea030] [0xc000fea000 0xc000fea018 0xc000fea030] [0xc000fea010 0xc000fea028] [0x935700 0x935700] 0xc0022cfc20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:53:07.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:53:07.683: INFO: rc: 1 Feb 20 11:53:07.684: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e4c210 exit status 1 true [0xc0000de038 0xc0000de138 
0xc0000de180] [0xc0000de038 0xc0000de138 0xc0000de180] [0xc0000de108 0xc0000de178] [0x935700 0x935700] 0xc00210a2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:53:17.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:53:17.829: INFO: rc: 1 Feb 20 11:53:17.829: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001516300 exit status 1 true [0xc000fea038 0xc000fea050 0xc000fea068] [0xc000fea038 0xc000fea050 0xc000fea068] [0xc000fea048 0xc000fea060] [0x935700 0x935700] 0xc0022cfec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:53:27.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:53:27.969: INFO: rc: 1 Feb 20 11:53:27.969: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001516510 exit status 1 true [0xc000fea070 0xc000fea088 0xc000fea0a0] [0xc000fea070 0xc000fea088 0xc000fea0a0] [0xc000fea080 0xc000fea098] [0x935700 0x935700] 0xc00218c180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:53:37.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:53:38.095: INFO: rc: 1 Feb 20 11:53:38.095: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cb0720 exit status 1 true [0xc0019a8038 0xc0019a8050 0xc0019a8068] [0xc0019a8038 0xc0019a8050 0xc0019a8068] [0xc0019a8048 0xc0019a8060] [0x935700 0x935700] 0xc00240a480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:53:48.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:53:48.191: INFO: rc: 1 Feb 20 11:53:48.191: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e4c390 exit status 1 true [0xc0000de1d0 0xc0000de200 0xc0000de270] [0xc0000de1d0 0xc0000de200 0xc0000de270] [0xc0000de1f8 0xc0000de218] [0x935700 0x935700] 0xc00210a600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found 
error: exit status 1 Feb 20 11:53:58.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:53:58.327: INFO: rc: 1 Feb 20 11:53:58.328: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e4c510 exit status 1 true [0xc0000de298 0xc0000de2c8 0xc0000de380] [0xc0000de298 0xc0000de2c8 0xc0000de380] [0xc0000de2c0 0xc0000de308] [0x935700 0x935700] 0xc00210a8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:54:08.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:54:08.535: INFO: rc: 1 Feb 20 11:54:08.535: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002572120 exit status 1 true [0xc000b4e000 0xc000b4e018 0xc000b4e030] [0xc000b4e000 0xc000b4e018 0xc000b4e030] [0xc000b4e010 0xc000b4e028] [0x935700 0x935700] 0xc0023a0ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:54:18.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:54:18.688: INFO: rc: 1 Feb 20 11:54:18.688: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002572270 exit status 1 true [0xc000b4e038 0xc000b4e050 0xc000b4e068] [0xc000b4e038 0xc000b4e050 0xc000b4e068] [0xc000b4e048 0xc000b4e060] [0x935700 0x935700] 0xc0023a0e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:54:28.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:54:28.787: INFO: rc: 1 Feb 20 11:54:28.787: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002572390 exit status 1 true [0xc000b4e070 0xc000b4e088 0xc000b4e0a0] [0xc000b4e070 0xc000b4e088 0xc000b4e0a0] [0xc000b4e080 0xc000b4e098] [0x935700 0x935700] 0xc0023a1200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:54:38.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:54:38.944: INFO: rc: 1 Feb 20 11:54:38.945: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025724b0 exit status 1 true [0xc000b4e0a8 0xc000b4e0c0 0xc000b4e0d8] [0xc000b4e0a8 0xc000b4e0c0 0xc000b4e0d8] [0xc000b4e0b8 0xc000b4e0d0] [0x935700 0x935700] 0xc0023a1500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:54:48.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:54:49.062: INFO: rc: 1 Feb 20 11:54:49.062: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cb0570 exit status 1 true [0xc0019a8000 0xc0019a8018 0xc0019a8030] [0xc0019a8000 0xc0019a8018 0xc0019a8030] [0xc0019a8010 0xc0019a8028] [0x935700 0x935700] 0xc0022cfc20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:54:59.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:54:59.207: INFO: rc: 1 Feb 20 11:54:59.208: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001516150 exit status 1 true [0xc000fea000 0xc000fea018 0xc000fea030] [0xc000fea000 0xc000fea018 0xc000fea030] [0xc000fea010 0xc000fea028] [0x935700 0x935700] 0xc00210a2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:55:09.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:55:09.304: INFO: rc: 1 Feb 20 11:55:09.304: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001e4c1e0 exit status 1 true [0xc0000de038 0xc0000de138 0xc0000de180] [0xc0000de038 0xc0000de138 0xc0000de180] [0xc0000de108 0xc0000de178] [0x935700 0x935700] 0xc00240a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:55:19.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:55:19.474: INFO: rc: 1 Feb 20 11:55:19.475: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl 
[kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002572150 exit status 1 true [0xc000b4e000 0xc000b4e018 0xc000b4e030] [0xc000b4e000 0xc000b4e018 0xc000b4e030] [0xc000b4e010 0xc000b4e028] [0x935700 0x935700] 0xc00218c1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:55:29.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:55:29.567: INFO: rc: 1 Feb 20 11:55:29.568: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cb0780 exit status 1 true [0xc0019a8038 0xc0019a8050 0xc0019a8068] [0xc0019a8038 0xc0019a8050 0xc0019a8068] [0xc0019a8048 0xc0019a8060] [0x935700 0x935700] 0xc0022cfec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:55:39.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:55:39.704: INFO: rc: 1 Feb 20 11:55:39.704: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001cb08d0 exit status 1 true [0xc0019a8070 0xc0019a8088 0xc0019a80a0] [0xc0019a8070 0xc0019a8088 0xc0019a80a0] [0xc0019a8080 0xc0019a8098] [0x935700 0x935700] 0xc0023a0b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:55:49.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:55:49.836: INFO: rc: 1 Feb 20 11:55:49.836: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001516360 exit status 1 true [0xc000fea038 0xc000fea050 0xc000fea068] [0xc000fea038 0xc000fea050 0xc000fea068] [0xc000fea048 0xc000fea060] [0x935700 0x935700] 0xc00210a600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 20 11:55:59.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-q7pkz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 20 11:55:59.985: INFO: rc: 1 Feb 20 11:55:59.985: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Feb 20 11:55:59.985: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 20 11:56:00.041: INFO: Deleting all statefulset in ns e2e-tests-statefulset-q7pkz Feb 20 11:56:00.045: INFO: Scaling statefulset ss to 0 Feb 20 11:56:00.070: INFO: Waiting for statefulset status.replicas updated to 0 Feb 20 11:56:00.075: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:56:00.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-q7pkz" for this suite. Feb 20 11:56:08.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:56:08.351: INFO: namespace: e2e-tests-statefulset-q7pkz, resource: bindings, ignored listing per whitelist Feb 20 11:56:08.444: INFO: namespace e2e-tests-statefulset-q7pkz deletion completed in 8.277704428s • [SLOW TEST:400.042 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:56:08.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-fb5c5ccf-53d7-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 11:56:08.878: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008" in namespace "e2e-tests-configmap-hzkhh" to be "success or failure" Feb 20 11:56:08.903: INFO: Pod "pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.047527ms Feb 20 11:56:10.919: INFO: Pod "pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041312731s Feb 20 11:56:12.932: INFO: Pod "pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053736636s Feb 20 11:56:15.014: INFO: Pod "pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135692259s Feb 20 11:56:17.038: INFO: Pod "pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.160354439s Feb 20 11:56:19.052: INFO: Pod "pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.173833586s STEP: Saw pod success Feb 20 11:56:19.052: INFO: Pod "pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 11:56:19.058: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 20 11:56:19.972: INFO: Waiting for pod pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008 to disappear Feb 20 11:56:20.201: INFO: Pod pod-configmaps-fb5e3493-53d7-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:56:20.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hzkhh" for this suite. Feb 20 11:56:26.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:56:26.379: INFO: namespace: e2e-tests-configmap-hzkhh, resource: bindings, ignored listing per whitelist Feb 20 11:56:26.416: INFO: namespace e2e-tests-configmap-hzkhh deletion completed in 6.199985413s • [SLOW TEST:17.972 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:56:26.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 11:56:27.055: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"061f48ec-53d8-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001f55252), BlockOwnerDeletion:(*bool)(0xc001f55253)}} Feb 20 11:56:27.171: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0613ac36-53d8-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001f553fa), BlockOwnerDeletion:(*bool)(0xc001f553fb)}} Feb 20 11:56:27.211: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"061c876a-53d8-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0024b6702), BlockOwnerDeletion:(*bool)(0xc0024b6703)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:56:32.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-gc-qpns7" for this suite. Feb 20 11:56:38.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:56:38.712: INFO: namespace: e2e-tests-gc-qpns7, resource: bindings, ignored listing per whitelist Feb 20 11:56:38.743: INFO: namespace e2e-tests-gc-qpns7 deletion completed in 6.418560558s • [SLOW TEST:12.326 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:56:38.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Feb 20 11:56:38.955: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Feb 20 11:56:38.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:56:41.298: INFO: stderr: "" Feb 20 11:56:41.298: INFO: stdout: "service/redis-slave created\n" Feb 20 11:56:41.299: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Feb 20 11:56:41.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:56:41.768: INFO: stderr: "" Feb 20 11:56:41.768: INFO: stdout: "service/redis-master created\n" Feb 20 11:56:41.769: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 20 11:56:41.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:56:42.211: INFO: stderr: "" Feb 20 11:56:42.211: INFO: stdout: "service/frontend created\n" Feb 20 11:56:42.213: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Feb 20 11:56:42.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:56:42.810: INFO: stderr: "" Feb 20 11:56:42.810: INFO: stdout: "deployment.extensions/frontend created\n" Feb 20 11:56:42.810: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 20 11:56:42.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:56:43.248: INFO: stderr: "" Feb 20 11:56:43.248: INFO: stdout: "deployment.extensions/redis-master created\n" Feb 20 11:56:43.249: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Feb 20 11:56:43.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:56:43.712: INFO: stderr: "" Feb 20 11:56:43.712: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Feb 20 11:56:43.712: INFO: Waiting for all frontend pods to be Running. Feb 20 11:57:13.764: INFO: Waiting for frontend to serve content. Feb 20 11:57:13.856: INFO: Trying to add a new entry to the guestbook. Feb 20 11:57:13.897: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Feb 20 11:57:13.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:57:14.486: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 20 11:57:14.486: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Feb 20 11:57:14.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:57:14.702: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 11:57:14.702: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 20 11:57:14.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:57:14.864: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 11:57:14.864: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 20 11:57:14.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:57:14.969: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 11:57:14.969: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 20 11:57:14.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:57:15.186: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 11:57:15.186: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 20 11:57:15.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wb554' Feb 20 11:57:15.480: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 11:57:15.480: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:57:15.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wb554" for this suite. 
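The guestbook steps above pipe each manifest into 'kubectl create -f -' and then tear everything down with 'kubectl delete --grace-period=0 --force -f -'. The following minimal Go sketch of that pattern is illustrative only: it is not the e2e framework's own kubectl helper, and the embedded manifest is simply the redis-master Service copied from the log.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// Service manifest copied from the guestbook output above.
const redisMasterService = `apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
`

// kubectlWithStdin runs kubectl with the given arguments, feeding the
// manifest on stdin the way the 'create -f -' calls in the log do, and
// returns the combined stdout/stderr.
func kubectlWithStdin(manifest string, args ...string) (string, error) {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdin = strings.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	ns := "e2e-tests-kubectl-wb554" // namespace name taken from the log

	// Create the Service from the piped manifest.
	out, err := kubectlWithStdin(redisMasterService, "create", "-f", "-", "--namespace="+ns)
	if err != nil {
		log.Fatalf("create failed: %v\n%s", err, out)
	}
	fmt.Print(out) // expected: "service/redis-master created"

	// Clean up the same way the test does: immediate, forced deletion.
	out, err = kubectlWithStdin(redisMasterService, "delete", "--grace-period=0", "--force", "-f", "-", "--namespace="+ns)
	if err != nil {
		log.Fatalf("delete failed: %v\n%s", err, out)
	}
	fmt.Print(out) // expected: "service \"redis-master\" force deleted"
}

The --grace-period=0 --force combination is also what produces the repeated warning lines above: the delete returns immediately instead of waiting for the resources to finish terminating.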
Feb 20 11:58:03.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:58:03.842: INFO: namespace: e2e-tests-kubectl-wb554, resource: bindings, ignored listing per whitelist Feb 20 11:58:03.892: INFO: namespace e2e-tests-kubectl-wb554 deletion completed in 48.33452865s • [SLOW TEST:85.148 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:58:03.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 11:58:04.257: INFO: Creating deployment "nginx-deployment" Feb 20 11:58:04.273: INFO: Waiting for observed generation 1 Feb 20 11:58:06.457: INFO: Waiting for all required pods to come up Feb 20 11:58:08.008: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 20 11:58:42.860: INFO: Waiting for deployment "nginx-deployment" to complete Feb 20 11:58:42.889: INFO: Updating deployment "nginx-deployment" with a non-existent image Feb 20 11:58:42.900: INFO: Updating deployment nginx-deployment Feb 20 11:58:42.900: INFO: Waiting for observed generation 2 Feb 20 11:58:46.939: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 20 11:58:46.959: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 20 11:58:47.300: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 20 11:58:47.331: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 20 11:58:47.331: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 20 11:58:47.335: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 20 11:58:47.359: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 20 11:58:47.359: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 20 11:58:47.393: INFO: Updating deployment nginx-deployment Feb 20 11:58:47.393: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 20 11:58:47.918: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 20 11:58:49.967: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 
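The two numbers being verified follow from proportional scaling. Scaling "nginx-deployment" from 10 to 30 with maxSurge=3 allows 33 replicas in total, and the 33 - (8 + 5) = 20 extra replicas are split between the two replicasets roughly in proportion to their current sizes, so the first rollout's replicaset grows from 8 to 20 and the second (blocked on the nginx:404 image) from 5 to 13. The Go sketch below reproduces that arithmetic; the function name and the rounding rule are simplifications for illustration, not the deployment controller's exact implementation.

package main

import (
	"fmt"
	"math"
)

// proportionalScale splits the extra replicas from a mid-rollout scale-up
// between the old and new replica sets in proportion to their current sizes,
// with the rounding remainder going to the newer set. Simplified illustration
// of the behaviour the log verifies, not the controller's real code.
func proportionalScale(oldReplicas, newReplicas, target, maxSurge int32) (int32, int32) {
	allowed := target + maxSurge       // 30 + 3 = 33 replicas allowed in total
	total := oldReplicas + newReplicas // 8 + 5 = 13 currently requested
	toAdd := allowed - total           // 20 extra replicas to distribute

	// Old replica set's share, rounded to the nearest integer: round(20*8/13) = 12.
	oldShare := int32(math.Round(float64(toAdd) * float64(oldReplicas) / float64(total)))
	newShare := toAdd - oldShare // remainder goes to the new replica set: 8
	return oldReplicas + oldShare, newReplicas + newShare
}

func main() {
	// Values from the log: the first rollout's replicaset has 8 replicas, the
	// second has 5, and the deployment is scaled from 10 to 30 with maxSurge=3.
	first, second := proportionalScale(8, 5, 30, 3)
	fmt.Printf("first rollout: %d, second rollout: %d\n", first, second)
}

Running it prints "first rollout: 20, second rollout: 13", matching the two Verifying lines above.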
[AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 20 11:58:50.448: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-rprfx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rprfx/deployments/nginx-deployment,UID:4027b05b-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307747,Generation:3,CreationTimestamp:2020-02-20 11:58:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-20 11:58:44 +0000 UTC 2020-02-20 11:58:04 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-20 11:58:49 +0000 UTC 2020-02-20 11:58:49 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 20 11:58:50.771: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-rprfx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rprfx/replicasets/nginx-deployment-5c98f8fb5,UID:573101ae-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307739,Generation:3,CreationTimestamp:2020-02-20 11:58:42 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4027b05b-53d8-11ea-a994-fa163e34d433 0xc002765817 0xc002765818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 11:58:50.771: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 20 11:58:50.772: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-rprfx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rprfx/replicasets/nginx-deployment-85ddf47c5d,UID:402e552e-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307735,Generation:3,CreationTimestamp:2020-02-20 11:58:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4027b05b-53d8-11ea-a994-fa163e34d433 0xc0027658d7 0xc0027658d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 20 11:58:51.158: INFO: Pod "nginx-deployment-5c98f8fb5-2h74r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2h74r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-2h74r,UID:5bb40fdf-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307757,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc001d7aa77 0xc001d7aa78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7b0d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7b0f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.159: INFO: Pod "nginx-deployment-5c98f8fb5-485b6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-485b6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-485b6,UID:57915f15-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307732,Generation:0,CreationTimestamp:2020-02-20 11:58:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc001d7b167 0xc001d7b168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7b1d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7b1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-20 11:58:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.159: INFO: Pod "nginx-deployment-5c98f8fb5-4jnts" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4jnts,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-4jnts,UID:57490e36-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307730,Generation:0,CreationTimestamp:2020-02-20 11:58:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc001d7b2b7 0xc001d7b2b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7b760} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7b780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-20 11:58:43 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.160: INFO: Pod "nginx-deployment-5c98f8fb5-97txj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-97txj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-97txj,UID:574903a0-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307727,Generation:0,CreationTimestamp:2020-02-20 11:58:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc001d7b847 0xc001d7b848}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7b8b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7baf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-20 11:58:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.161: INFO: Pod "nginx-deployment-5c98f8fb5-9pjcs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9pjcs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-9pjcs,UID:5be5b464-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307768,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc001d7bd17 0xc001d7bd18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7bd80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7bda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.161: INFO: Pod "nginx-deployment-5c98f8fb5-dm244" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dm244,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-dm244,UID:5737564b-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307719,Generation:0,CreationTimestamp:2020-02-20 11:58:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc001d7bea0 0xc001d7bea1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] 
nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d7bf50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d7bf70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-20 11:58:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.162: INFO: Pod "nginx-deployment-5c98f8fb5-k8rx8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k8rx8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-k8rx8,UID:5b6a311e-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307748,Generation:0,CreationTimestamp:2020-02-20 11:58:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc00234e037 0xc00234e038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234ea80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00234eaf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.162: INFO: Pod "nginx-deployment-5c98f8fb5-s72js" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s72js,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-s72js,UID:5be5a448-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307765,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc00234ed07 0xc00234ed08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234ed70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00234eda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.163: INFO: Pod "nginx-deployment-5c98f8fb5-smgdj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-smgdj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-smgdj,UID:5be594aa-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307769,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc00234f120 0xc00234f121}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234f190} {node.kubernetes.io/unreachable Exists NoExecute 0xc00234f1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.163: INFO: Pod "nginx-deployment-5c98f8fb5-vtlz9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vtlz9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-vtlz9,UID:5be5b07f-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307766,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc00234f210 
0xc00234f211}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234f2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00234f2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.163: INFO: Pod "nginx-deployment-5c98f8fb5-xwnds" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xwnds,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-xwnds,UID:5bb3c336-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307758,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc00234f340 0xc00234f341}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234f3d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00234f4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.164: INFO: Pod "nginx-deployment-5c98f8fb5-zn8p9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zn8p9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-5c98f8fb5-zn8p9,UID:579ad83f-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307736,Generation:0,CreationTimestamp:2020-02-20 11:58:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 573101ae-53d8-11ea-a994-fa163e34d433 0xc00234f537 0xc00234f538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234f6c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00234f6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:45 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-20 11:58:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.165: INFO: Pod "nginx-deployment-85ddf47c5d-48vxb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-48vxb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-48vxb,UID:405ecefc-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307661,Generation:0,CreationTimestamp:2020-02-20 11:58:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc00234f7a7 0xc00234f7a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234f810} {node.kubernetes.io/unreachable Exists NoExecute 0xc00234f830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-20 11:58:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 11:58:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine 
docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2cd14e28e230a3a7aea24b8726c7b8f6cd7eff9ceab61a75db6971cc0b380425}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.165: INFO: Pod "nginx-deployment-85ddf47c5d-57sh2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-57sh2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-57sh2,UID:407021cb-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307655,Generation:0,CreationTimestamp:2020-02-20 11:58:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc00234f9e7 0xc00234f9e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234fa50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00234fa70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-20 11:58:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 11:58:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0e51dcef9cb16aea367de7c58a491263aa1d0d0b9d407827c7de1771e75faf0c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.166: INFO: Pod "nginx-deployment-85ddf47c5d-5w2rc" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5w2rc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-5w2rc,UID:405f3d98-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307638,Generation:0,CreationTimestamp:2020-02-20 11:58:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc00234fbd7 0xc00234fbd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234fc40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00234fc60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-20 11:58:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 11:58:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d07ed8872b7a127ac0ac1aab6a394e329681e94115554831f26a6bfd7b7434ee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.166: INFO: Pod "nginx-deployment-85ddf47c5d-78cgp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-78cgp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-78cgp,UID:5be5ce17-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307763,Generation:0,CreationTimestamp:2020-02-20 
11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc00234fd27 0xc00234fd28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234fd90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00234fdb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.166: INFO: Pod "nginx-deployment-85ddf47c5d-7rl2k" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7rl2k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-7rl2k,UID:40701f71-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307676,Generation:0,CreationTimestamp:2020-02-20 11:58:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc00234fe10 0xc00234fe11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234fe70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00234fe90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-20 11:58:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 11:58:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d4d0be8827f2f0eddecd1925dcde38d796d443e7f2a58d4fba7cbf39e9e80742}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.167: INFO: Pod "nginx-deployment-85ddf47c5d-8c769" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8c769,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-8c769,UID:5b67404d-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307750,Generation:0,CreationTimestamp:2020-02-20 11:58:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc00234ff87 0xc00234ff88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00234fff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023161b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.167: INFO: Pod "nginx-deployment-85ddf47c5d-fkndm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fkndm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-fkndm,UID:40504eba-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307640,Generation:0,CreationTimestamp:2020-02-20 11:58:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc002316227 0xc002316228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023167d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023167f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 
+0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-20 11:58:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 11:58:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://823a0c1a5f2035e0b92d170e76f7597a4617bca2599b3545d64b6e887fb93cd4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.167: INFO: Pod "nginx-deployment-85ddf47c5d-fnf4t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fnf4t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-fnf4t,UID:5be546f9-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307762,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc0023168b7 0xc0023168b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002316920} {node.kubernetes.io/unreachable Exists NoExecute 0xc002316940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.168: INFO: Pod "nginx-deployment-85ddf47c5d-jl8k8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jl8k8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-jl8k8,UID:5be56d6c-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307764,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc0023169a0 0xc0023169a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002316a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002316a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.168: INFO: Pod "nginx-deployment-85ddf47c5d-k5sj2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-k5sj2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-k5sj2,UID:40555b57-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307644,Generation:0,CreationTimestamp:2020-02-20 11:58:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc002316a80 0xc002316a81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd 
true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002316ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002316b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-20 11:58:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 11:58:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bcdea468bf5bc3cabc1686f1bc0d608b1a76e1b148b57b399182ae2137f566a6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.168: INFO: Pod "nginx-deployment-85ddf47c5d-ldjp6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ldjp6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-ldjp6,UID:405e316e-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307650,Generation:0,CreationTimestamp:2020-02-20 11:58:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc002316bc7 0xc002316bc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002316cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002316cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-20 11:58:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 11:58:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://70420e0127776c8d6607df855ff3caa2cc3536fcb11de84101e8cfc44b9e0639}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.169: INFO: Pod "nginx-deployment-85ddf47c5d-n22fj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n22fj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-n22fj,UID:5bb4447f-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307756,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc002316d97 0xc002316d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002316e00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002316e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.169: INFO: Pod "nginx-deployment-85ddf47c5d-s2hx2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s2hx2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-s2hx2,UID:5bb4078e-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307759,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc002316ea7 0xc002316ea8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002316f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002316f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 
11:58:50 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.170: INFO: Pod "nginx-deployment-85ddf47c5d-xpr7v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xpr7v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-xpr7v,UID:5be57517-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307767,Generation:0,CreationTimestamp:2020-02-20 11:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc0023170f7 0xc0023170f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002317330} {node.kubernetes.io/unreachable Exists NoExecute 0xc002317350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 20 11:58:51.170: INFO: Pod "nginx-deployment-85ddf47c5d-zpgvz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zpgvz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rprfx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rprfx/pods/nginx-deployment-85ddf47c5d-zpgvz,UID:405658eb-53d8-11ea-a994-fa163e34d433,ResourceVersion:22307669,Generation:0,CreationTimestamp:2020-02-20 11:58:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 402e552e-53d8-11ea-a994-fa163e34d433 0xc0023173b0 0xc0023173b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-79nvd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-79nvd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-79nvd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002317410} {node.kubernetes.io/unreachable Exists NoExecute 0xc002317430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 11:58:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-20 11:58:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-20 11:58:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://743406c6716ea1a72605800ee16bd34047010dc2f5a68436806c81de59da2b3e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 11:58:51.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-rprfx" for this suite. 
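Note: every pod dumped above belongs to a single ReplicaSet (pod-template-hash 85ddf47c5d) of the Deployment "nginx-deployment", running docker.io/library/nginx:1.14-alpine; some replicas are Running and the rest are still Pending mid-scale. The test's own manifest is not reproduced in this log, so the sketch below is only an illustrative Deployment of the same shape plus a scale command, with arbitrary replica counts:

# Create a Deployment matching the labels and image seen in the pod dumps (illustrative).
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Scaling while replicas are still coming up: the deployment controller spreads the new
# count across the existing ReplicaSets in proportion to their current sizes.
kubectl scale deployment nginx-deployment --replicas=20
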
Feb 20 11:59:50.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 11:59:50.834: INFO: namespace: e2e-tests-deployment-rprfx, resource: bindings, ignored listing per whitelist Feb 20 11:59:51.171: INFO: namespace e2e-tests-deployment-rprfx deletion completed in 58.862549669s • [SLOW TEST:107.279 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 11:59:51.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 20 11:59:52.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qnvm6' Feb 20 11:59:53.224: INFO: stderr: "" Feb 20 11:59:53.224: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Feb 20 11:59:54.234: INFO: Selector matched 1 pods for map[app:redis] Feb 20 11:59:54.234: INFO: Found 0 / 1 Feb 20 11:59:56.690: INFO: Selector matched 1 pods for map[app:redis] Feb 20 11:59:56.690: INFO: Found 0 / 1 Feb 20 11:59:57.237: INFO: Selector matched 1 pods for map[app:redis] Feb 20 11:59:57.237: INFO: Found 0 / 1 Feb 20 11:59:59.161: INFO: Selector matched 1 pods for map[app:redis] Feb 20 11:59:59.161: INFO: Found 0 / 1 Feb 20 11:59:59.241: INFO: Selector matched 1 pods for map[app:redis] Feb 20 11:59:59.241: INFO: Found 0 / 1 Feb 20 12:00:00.237: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:00.237: INFO: Found 0 / 1 Feb 20 12:00:01.247: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:01.248: INFO: Found 0 / 1 Feb 20 12:00:02.240: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:02.240: INFO: Found 0 / 1 Feb 20 12:00:03.237: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:03.237: INFO: Found 0 / 1 Feb 20 12:00:04.236: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:04.236: INFO: Found 0 / 1 Feb 20 12:00:05.684: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:05.684: INFO: Found 0 / 1 Feb 20 12:00:06.242: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:06.243: INFO: Found 0 / 1 Feb 20 12:00:07.239: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:07.239: INFO: Found 0 / 1 Feb 20 12:00:08.235: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:08.235: INFO: Found 0 / 1 Feb 20 12:00:09.246: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:09.246: INFO: Found 0 / 1 Feb 20 12:00:10.234: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:10.234: INFO: Found 1 / 1 Feb 20 12:00:10.234: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 20 12:00:10.240: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:10.240: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 20 12:00:10.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-6cx27 --namespace=e2e-tests-kubectl-qnvm6 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 20 12:00:10.526: INFO: stderr: "" Feb 20 12:00:10.526: INFO: stdout: "pod/redis-master-6cx27 patched\n" STEP: checking annotations Feb 20 12:00:10.540: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:00:10.540: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:00:10.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qnvm6" for this suite. 
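Note: the patch step above is a plain strategic-merge patch that adds one annotation to a running pod; the command is the same one the test shells out to. Run by hand against the same throwaway pod and namespace it would be:

kubectl patch pod redis-master-6cx27 --namespace=e2e-tests-kubectl-qnvm6 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
# Read the annotation back (not part of the test, just one way to confirm it landed):
kubectl get pod redis-master-6cx27 --namespace=e2e-tests-kubectl-qnvm6 \
  -o jsonpath='{.metadata.annotations.x}'
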
Feb 20 12:00:48.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:00:48.793: INFO: namespace: e2e-tests-kubectl-qnvm6, resource: bindings, ignored listing per whitelist Feb 20 12:00:48.800: INFO: namespace e2e-tests-kubectl-qnvm6 deletion completed in 38.251371186s • [SLOW TEST:57.628 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:00:48.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 12:00:49.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-q849x' Feb 20 12:00:49.257: INFO: stderr: "" Feb 20 12:00:49.257: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 20 12:01:04.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-q849x -o json' Feb 20 12:01:04.462: INFO: stderr: "" Feb 20 12:01:04.462: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-20T12:00:49Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-q849x\",\n \"resourceVersion\": \"22308239\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-q849x/pods/e2e-test-nginx-pod\",\n \"uid\": \"a27b9f1b-53d8-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-ln5mc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n 
\"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-ln5mc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-ln5mc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-20T12:00:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-20T12:01:00Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-20T12:01:00Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-20T12:00:49Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://4d8280f9d90ad97528535efde0f11f6824d3283198fcf1481e870329075aaeab\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-20T12:00:58Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-20T12:00:50Z\"\n }\n}\n" STEP: replace the image in the pod Feb 20 12:01:04.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-q849x' Feb 20 12:01:04.787: INFO: stderr: "" Feb 20 12:01:04.787: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Feb 20 12:01:04.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-q849x' Feb 20 12:01:13.660: INFO: stderr: "" Feb 20 12:01:13.660: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:01:13.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q849x" for this suite. 
Feb 20 12:01:19.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:01:20.023: INFO: namespace: e2e-tests-kubectl-q849x, resource: bindings, ignored listing per whitelist Feb 20 12:01:20.109: INFO: namespace e2e-tests-kubectl-q849x deletion completed in 6.296312909s • [SLOW TEST:31.308 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:01:20.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 12:01:30.501: INFO: Waiting up to 5m0s for pod "client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008" in namespace "e2e-tests-pods-4vsvl" to be "success or failure" Feb 20 12:01:30.701: INFO: Pod "client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 199.720076ms Feb 20 12:01:32.722: INFO: Pod "client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220263826s Feb 20 12:01:34.741: INFO: Pod "client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239791404s Feb 20 12:01:36.757: INFO: Pod "client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255950184s Feb 20 12:01:38.971: INFO: Pod "client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.469644066s Feb 20 12:01:41.057: INFO: Pod "client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.555137918s STEP: Saw pod success Feb 20 12:01:41.057: INFO: Pod "client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:01:41.061: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008 container env3cont: STEP: delete the pod Feb 20 12:01:41.443: INFO: Waiting for pod client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008 to disappear Feb 20 12:01:41.517: INFO: Pod client-envvars-bb0c0538-53d8-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:01:41.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-4vsvl" for this suite. 
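Note: the env-vars test above depends on the kubelet injecting <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT variables into containers for every Service that already exists in the pod's namespace when the pod starts. A quick way to observe the same behaviour by hand (pod name is illustrative; busybox:1.29 is the image used elsewhere in this run):

kubectl run env-dump --restart=Never --image=docker.io/library/busybox:1.29 \
  -- sh -c 'env | grep _SERVICE_ | sort'
# The container exits after printing, so the injected variables show up in its logs:
kubectl logs env-dump
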
Feb 20 12:02:35.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:02:35.882: INFO: namespace: e2e-tests-pods-4vsvl, resource: bindings, ignored listing per whitelist Feb 20 12:02:35.975: INFO: namespace e2e-tests-pods-4vsvl deletion completed in 54.443091459s • [SLOW TEST:75.867 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:02:35.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-e23d6f6c-53d8-11ea-bcb7-0242ac110008 Feb 20 12:02:36.212: INFO: Pod name my-hostname-basic-e23d6f6c-53d8-11ea-bcb7-0242ac110008: Found 0 pods out of 1 Feb 20 12:02:41.532: INFO: Pod name my-hostname-basic-e23d6f6c-53d8-11ea-bcb7-0242ac110008: Found 1 pods out of 1 Feb 20 12:02:41.532: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e23d6f6c-53d8-11ea-bcb7-0242ac110008" are running Feb 20 12:02:45.559: INFO: Pod "my-hostname-basic-e23d6f6c-53d8-11ea-bcb7-0242ac110008-xczlr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:02:36 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:02:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e23d6f6c-53d8-11ea-bcb7-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:02:36 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e23d6f6c-53d8-11ea-bcb7-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:02:36 +0000 UTC Reason: Message:}]) Feb 20 12:02:45.560: INFO: Trying to dial the pod Feb 20 12:02:50.721: INFO: Controller my-hostname-basic-e23d6f6c-53d8-11ea-bcb7-0242ac110008: Got expected result from replica 1 [my-hostname-basic-e23d6f6c-53d8-11ea-bcb7-0242ac110008-xczlr]: "my-hostname-basic-e23d6f6c-53d8-11ea-bcb7-0242ac110008-xczlr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:02:50.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-cbqs6" for this suite. 
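Note: the ReplicationController test above creates an RC whose single replica serves its own hostname over HTTP, then dials the pod and expects the pod's name back. The RC's manifest is not shown in this output, so the sketch below is shape-level only: the image and container port are assumptions (the conformance test uses a purpose-built hostname-serving image), and any container that echoes its hostname would do:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed image
        ports:
        - containerPort: 9376                                        # assumed port
EOF
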
Feb 20 12:02:58.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:02:58.847: INFO: namespace: e2e-tests-replication-controller-cbqs6, resource: bindings, ignored listing per whitelist Feb 20 12:02:58.936: INFO: namespace e2e-tests-replication-controller-cbqs6 deletion completed in 8.209337762s • [SLOW TEST:22.960 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:02:58.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Feb 20 12:03:10.564: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:03:35.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-prlgv" for this suite. Feb 20 12:03:41.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:03:41.990: INFO: namespace: e2e-tests-namespaces-prlgv, resource: bindings, ignored listing per whitelist Feb 20 12:03:42.049: INFO: namespace e2e-tests-namespaces-prlgv deletion completed in 6.269608084s STEP: Destroying namespace "e2e-tests-nsdeletetest-547qj" for this suite. Feb 20 12:03:42.053: INFO: Namespace e2e-tests-nsdeletetest-547qj was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-q4qb7" for this suite. 
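Note: the namespace test above reduces to: create a namespace, start a pod inside it, delete the namespace, and confirm the pod is removed along with it. By hand, with illustrative names (the --generator flag matches the kubectl run form used elsewhere in this log):

kubectl create namespace nsdelete-demo
kubectl run ns-demo-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine \
  --namespace=nsdelete-demo
kubectl delete namespace nsdelete-demo
# Namespace teardown is asynchronous (deletions in this log take roughly 6s to 59s);
# once it finishes, listing pods fails because the namespace itself is gone.
kubectl get pods --namespace=nsdelete-demo
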
Feb 20 12:03:48.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:03:48.209: INFO: namespace: e2e-tests-nsdeletetest-q4qb7, resource: bindings, ignored listing per whitelist Feb 20 12:03:48.470: INFO: namespace e2e-tests-nsdeletetest-q4qb7 deletion completed in 6.417035748s • [SLOW TEST:49.534 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:03:48.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 12:03:48.741: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-4ql6w" to be "success or failure" Feb 20 12:03:48.761: INFO: Pod "downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.758281ms Feb 20 12:03:50.776: INFO: Pod "downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035442623s Feb 20 12:03:52.793: INFO: Pod "downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052580013s Feb 20 12:03:54.801: INFO: Pod "downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060267373s Feb 20 12:03:56.809: INFO: Pod "downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068493956s Feb 20 12:03:59.022: INFO: Pod "downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.280983913s STEP: Saw pod success Feb 20 12:03:59.022: INFO: Pod "downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:03:59.033: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 12:03:59.347: INFO: Waiting for pod downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008 to disappear Feb 20 12:03:59.411: INFO: Pod downwardapi-volume-0d781282-53d9-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:03:59.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4ql6w" for this suite. Feb 20 12:04:05.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:04:05.562: INFO: namespace: e2e-tests-downward-api-4ql6w, resource: bindings, ignored listing per whitelist Feb 20 12:04:05.690: INFO: namespace e2e-tests-downward-api-4ql6w deletion completed in 6.272755065s • [SLOW TEST:17.220 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:04:05.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-17c37e70-53d9-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 12:04:06.093: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-17c63c33-53d9-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-8pxlq" to be "success or failure" Feb 20 12:04:06.115: INFO: Pod "pod-projected-configmaps-17c63c33-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.523465ms Feb 20 12:04:08.357: INFO: Pod "pod-projected-configmaps-17c63c33-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263325822s Feb 20 12:04:10.881: INFO: Pod "pod-projected-configmaps-17c63c33-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.788028361s Feb 20 12:04:12.897: INFO: Pod "pod-projected-configmaps-17c63c33-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.804018517s Feb 20 12:04:14.965: INFO: Pod "pod-projected-configmaps-17c63c33-53d9-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.872067707s STEP: Saw pod success Feb 20 12:04:14.966: INFO: Pod "pod-projected-configmaps-17c63c33-53d9-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:04:14.991: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-17c63c33-53d9-11ea-bcb7-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 20 12:04:15.156: INFO: Waiting for pod pod-projected-configmaps-17c63c33-53d9-11ea-bcb7-0242ac110008 to disappear Feb 20 12:04:15.164: INFO: Pod pod-projected-configmaps-17c63c33-53d9-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:04:15.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8pxlq" for this suite. Feb 20 12:04:21.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:04:21.345: INFO: namespace: e2e-tests-projected-8pxlq, resource: bindings, ignored listing per whitelist Feb 20 12:04:21.367: INFO: namespace e2e-tests-projected-8pxlq deletion completed in 6.195856615s • [SLOW TEST:15.677 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:04:21.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 20 12:04:21.542: INFO: Waiting up to 5m0s for pod "client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008" in namespace "e2e-tests-containers-p72xn" to be "success or failure" Feb 20 12:04:21.582: INFO: Pod "client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 40.328608ms Feb 20 12:04:23.591: INFO: Pod "client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049569615s Feb 20 12:04:25.606: INFO: Pod "client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063968757s Feb 20 12:04:27.614: INFO: Pod "client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072474104s Feb 20 12:04:29.898: INFO: Pod "client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.356503934s Feb 20 12:04:31.942: INFO: Pod "client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.400629215s STEP: Saw pod success Feb 20 12:04:31.942: INFO: Pod "client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:04:31.954: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 12:04:32.078: INFO: Waiting for pod client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008 to disappear Feb 20 12:04:32.104: INFO: Pod client-containers-20fd553e-53d9-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:04:32.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-p72xn" for this suite. Feb 20 12:04:38.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:04:38.410: INFO: namespace: e2e-tests-containers-p72xn, resource: bindings, ignored listing per whitelist Feb 20 12:04:38.611: INFO: namespace e2e-tests-containers-p72xn deletion completed in 6.496795613s • [SLOW TEST:17.243 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:04:38.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-kc8dw/configmap-test-2b47ec18-53d9-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 12:04:38.756: INFO: Waiting up to 5m0s for pod "pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008" in namespace "e2e-tests-configmap-kc8dw" to be "success or failure" Feb 20 12:04:38.842: INFO: Pod "pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 86.205781ms Feb 20 12:04:40.865: INFO: Pod "pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108685294s Feb 20 12:04:42.891: INFO: Pod "pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134720737s Feb 20 12:04:44.908: INFO: Pod "pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.151826055s Feb 20 12:04:46.932: INFO: Pod "pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175846795s Feb 20 12:04:49.314: INFO: Pod "pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.557626053s STEP: Saw pod success Feb 20 12:04:49.314: INFO: Pod "pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:04:49.322: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008 container env-test: STEP: delete the pod Feb 20 12:04:49.444: INFO: Waiting for pod pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008 to disappear Feb 20 12:04:49.464: INFO: Pod pod-configmaps-2b494e36-53d9-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:04:49.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-kc8dw" for this suite. Feb 20 12:04:57.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:04:57.559: INFO: namespace: e2e-tests-configmap-kc8dw, resource: bindings, ignored listing per whitelist Feb 20 12:04:57.656: INFO: namespace e2e-tests-configmap-kc8dw deletion completed in 8.179570412s • [SLOW TEST:19.045 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:04:57.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-t8bjp Feb 20 12:05:07.922: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-t8bjp STEP: checking the pod's current state and verifying that restartCount is present Feb 20 12:05:07.925: INFO: Initial restart count of pod liveness-http is 0 Feb 20 12:05:32.351: INFO: Restart count of pod e2e-tests-container-probe-t8bjp/liveness-http is now 1 (24.425909494s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:05:32.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-t8bjp" for this 
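The [sig-node] ConfigMap test summarized above injects a ConfigMap key into the container as an environment variable and verifies it from the container's output. A sketch of the two objects involved (hypothetical names and values, busybox image, pre-1.17 client-go assumed):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createConfigMapEnvPod creates a ConfigMap and a pod that surfaces one of its
// keys as the CONFIG_DATA_1 environment variable.
func createConfigMapEnvPod(c *kubernetes.Clientset, ns string) error {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-example"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    if _, err := c.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
        return err
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    _, err := c.CoreV1().Pods(ns).Create(pod)
    return err
}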
suite. Feb 20 12:05:38.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:05:38.741: INFO: namespace: e2e-tests-container-probe-t8bjp, resource: bindings, ignored listing per whitelist Feb 20 12:05:38.746: INFO: namespace e2e-tests-container-probe-t8bjp deletion completed in 6.291487366s • [SLOW TEST:41.090 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:05:38.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-4f1ce7fa-53d9-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 12:05:38.950: INFO: Waiting up to 5m0s for pod "pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008" in namespace "e2e-tests-secrets-bq5mf" to be "success or failure" Feb 20 12:05:38.964: INFO: Pod "pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.701243ms Feb 20 12:05:41.268: INFO: Pod "pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317203649s Feb 20 12:05:43.288: INFO: Pod "pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337978356s Feb 20 12:05:45.305: INFO: Pod "pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354517192s Feb 20 12:05:47.386: INFO: Pod "pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.435134571s Feb 20 12:05:49.412: INFO: Pod "pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
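The Probing container test above runs a pod whose /healthz HTTP liveness probe eventually fails and then watches restartCount climb from 0 to 1. A minimal pod of that shape, purely illustrative (the image, port, and probe timings are assumptions; in this API generation the probe handler field is still named Handler, later renamed ProbeHandler; pre-1.17 client-go assumed):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
)

// createHTTPLivenessPod creates a pod whose container gets restarted once its
// /healthz endpoint starts returning errors.
func createHTTPLivenessPod(c *kubernetes.Clientset, ns string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-example"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "k8s.gcr.io/liveness", // illustrative; any server exposing /healthz works
                Args:  []string{"/server"},
                LivenessProbe: &corev1.Probe{
                    Handler: corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/healthz",
                            Port: intstr.FromInt(8080),
                        },
                    },
                    InitialDelaySeconds: 15,
                    FailureThreshold:    1,
                },
            }},
        },
    }
    _, err := c.CoreV1().Pods(ns).Create(pod)
    return err
}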
Elapsed: 10.461759088s STEP: Saw pod success Feb 20 12:05:49.412: INFO: Pod "pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:05:49.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 20 12:05:49.564: INFO: Waiting for pod pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008 to disappear Feb 20 12:05:49.569: INFO: Pod pod-secrets-4f2909c0-53d9-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:05:49.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-bq5mf" for this suite. Feb 20 12:05:55.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:05:55.779: INFO: namespace: e2e-tests-secrets-bq5mf, resource: bindings, ignored listing per whitelist Feb 20 12:05:55.791: INFO: namespace e2e-tests-secrets-bq5mf deletion completed in 6.213950141s • [SLOW TEST:17.044 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:05:55.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-xm2gv [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Feb 20 12:05:56.124: INFO: Found 0 stateful pods, waiting for 3 Feb 20 12:06:06.429: INFO: Found 2 stateful pods, waiting for 3 Feb 20 12:06:16.140: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 12:06:16.140: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 12:06:16.140: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 20 12:06:26.141: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 12:06:26.141: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 12:06:26.141: INFO: 
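The [sig-storage] Secrets test summarized above maps a secret key to a custom path inside the volume and gives that item its own file mode. A sketch (hypothetical names and data, 0400 chosen only for illustration, pre-1.17 client-go assumed):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createSecretItemModePod creates a secret and a pod that mounts one of its
// keys under a remapped path with an explicit per-item file mode.
func createSecretItemModePod(c *kubernetes.Clientset, ns string) error {
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map-example"},
        Data:       map[string][]byte{"data-1": []byte("value-1")},
    }
    if _, err := c.CoreV1().Secrets(ns).Create(secret); err != nil {
        return err
    }
    itemMode := int32(0400)
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName: secret.Name,
                        Items: []corev1.KeyToPath{{
                            Key:  "data-1",
                            Path: "new-path-data-1",
                            Mode: &itemMode,
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
        },
    }
    _, err := c.CoreV1().Pods(ns).Create(pod)
    return err
}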
Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 20 12:06:26.199: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 20 12:06:36.373: INFO: Updating stateful set ss2 Feb 20 12:06:36.384: INFO: Waiting for Pod e2e-tests-statefulset-xm2gv/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Feb 20 12:06:46.782: INFO: Found 2 stateful pods, waiting for 3 Feb 20 12:06:57.334: INFO: Found 2 stateful pods, waiting for 3 Feb 20 12:07:06.805: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 12:07:06.805: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 12:07:06.805: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 20 12:07:16.802: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 20 12:07:16.802: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 20 12:07:16.802: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 20 12:07:16.921: INFO: Updating stateful set ss2 Feb 20 12:07:17.079: INFO: Waiting for Pod e2e-tests-statefulset-xm2gv/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 12:07:27.096: INFO: Waiting for Pod e2e-tests-statefulset-xm2gv/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 12:07:37.124: INFO: Updating stateful set ss2 Feb 20 12:07:37.221: INFO: Waiting for StatefulSet e2e-tests-statefulset-xm2gv/ss2 to complete update Feb 20 12:07:37.221: INFO: Waiting for Pod e2e-tests-statefulset-xm2gv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 12:07:47.248: INFO: Waiting for StatefulSet e2e-tests-statefulset-xm2gv/ss2 to complete update Feb 20 12:07:47.248: INFO: Waiting for Pod e2e-tests-statefulset-xm2gv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 20 12:07:57.245: INFO: Waiting for StatefulSet e2e-tests-statefulset-xm2gv/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 20 12:08:07.299: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xm2gv Feb 20 12:08:07.306: INFO: Scaling statefulset ss2 to 0 Feb 20 12:08:37.401: INFO: Waiting for statefulset status.replicas updated to 0 Feb 20 12:08:37.414: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:08:37.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-xm2gv" for this suite. 
Feb 20 12:08:45.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:08:45.760: INFO: namespace: e2e-tests-statefulset-xm2gv, resource: bindings, ignored listing per whitelist Feb 20 12:08:45.847: INFO: namespace e2e-tests-statefulset-xm2gv deletion completed in 8.325851163s • [SLOW TEST:170.056 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:08:45.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-beb140fe-53d9-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 12:08:46.074: INFO: Waiting up to 5m0s for pod "pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008" in namespace "e2e-tests-secrets-mts9r" to be "success or failure" Feb 20 12:08:46.136: INFO: Pod "pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 61.731134ms Feb 20 12:08:48.168: INFO: Pod "pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09435977s Feb 20 12:08:50.178: INFO: Pod "pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104186401s Feb 20 12:08:52.412: INFO: Pod "pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338012029s Feb 20 12:08:54.457: INFO: Pod "pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.383144854s Feb 20 12:08:56.474: INFO: Pod "pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
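The StatefulSet test that finishes above drives canary and then phased rolling updates by adjusting spec.updateStrategy.rollingUpdate.partition while the template image moves from nginx:1.14-alpine to nginx:1.15-alpine. A sketch of a StatefulSet prepared for that pattern, not the test's own code (hypothetical names, the "test" service name taken from the log, partition 2 meaning only ordinal 2 is updated first, pre-1.17 client-go assumed):

package sketch

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createCanaryStatefulSet creates a 3-replica StatefulSet whose rolling update
// is gated by a partition: pods with ordinal >= partition receive the new
// template revision, lower ordinals keep the old one until it is lowered.
func createCanaryStatefulSet(c *kubernetes.Clientset, ns string) error {
    replicas := int32(3)
    partition := int32(2) // only ss2-2, the canary, picks up a template change
    labels := map[string]string{"app": "ss2"}
    ss := &appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
        Spec: appsv1.StatefulSetSpec{
            Replicas:    &replicas,
            ServiceName: "test",
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine", // later patched to 1.15-alpine
                    }},
                },
            },
            UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
                Type: appsv1.RollingUpdateStatefulSetStrategyType,
                RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
                    Partition: &partition,
                },
            },
        },
    }
    _, err := c.AppsV1().StatefulSets(ns).Create(ss)
    return err
}

Lowering the partition step by step (2, then 1, then 0) after changing the template is what turns the canary into the phased rollout visible in the revision-waiting messages above.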
Elapsed: 10.40051906s STEP: Saw pod success Feb 20 12:08:56.475: INFO: Pod "pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:08:56.480: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 20 12:08:56.682: INFO: Waiting for pod pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008 to disappear Feb 20 12:08:56.699: INFO: Pod pod-secrets-beb2d62d-53d9-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:08:56.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mts9r" for this suite. Feb 20 12:09:02.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:09:02.833: INFO: namespace: e2e-tests-secrets-mts9r, resource: bindings, ignored listing per whitelist Feb 20 12:09:03.056: INFO: namespace e2e-tests-secrets-mts9r deletion completed in 6.341035906s • [SLOW TEST:17.209 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:09:03.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-hk8x5 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 20 12:09:03.230: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 20 12:09:47.550: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-hk8x5 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:09:47.550: INFO: >>> kubeConfig: /root/.kube/config I0220 12:09:47.665278 8 log.go:172] (0xc000bf84d0) (0xc001745860) Create stream I0220 12:09:47.665326 8 log.go:172] (0xc000bf84d0) (0xc001745860) Stream added, broadcasting: 1 I0220 12:09:47.671688 8 log.go:172] (0xc000bf84d0) Reply frame received for 1 I0220 12:09:47.671753 8 log.go:172] (0xc000bf84d0) (0xc001745900) Create stream I0220 12:09:47.671804 8 log.go:172] (0xc000bf84d0) (0xc001745900) Stream added, broadcasting: 3 I0220 12:09:47.673558 8 log.go:172] (0xc000bf84d0) Reply frame 
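The Secrets test summarized above mounts a secret volume with defaultMode while the pod runs as a non-root UID with an fsGroup, so the projected files end up readable by that group. A sketch (hypothetical names; UID 1000, fsGroup 1001, and mode 0440 are illustrative values, not the test's; pre-1.17 client-go assumed):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createNonRootSecretPod mounts an existing secret with a default file mode
// into a pod that runs as a non-root user with an explicit fsGroup.
func createNonRootSecretPod(c *kubernetes.Clientset, ns, secretName string) error {
    uid := int64(1000)
    fsGroup := int64(1001)
    mode := int32(0440)
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-nonroot-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &uid,
                FSGroup:   &fsGroup,
            },
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName:  secretName,
                        DefaultMode: &mode,
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "id && ls -ln /etc/secret-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
        },
    }
    _, err := c.CoreV1().Pods(ns).Create(pod)
    return err
}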
received for 3 I0220 12:09:47.673598 8 log.go:172] (0xc000bf84d0) (0xc001aaf4a0) Create stream I0220 12:09:47.673613 8 log.go:172] (0xc000bf84d0) (0xc001aaf4a0) Stream added, broadcasting: 5 I0220 12:09:47.675247 8 log.go:172] (0xc000bf84d0) Reply frame received for 5 I0220 12:09:48.022970 8 log.go:172] (0xc000bf84d0) Data frame received for 3 I0220 12:09:48.023014 8 log.go:172] (0xc001745900) (3) Data frame handling I0220 12:09:48.023039 8 log.go:172] (0xc001745900) (3) Data frame sent I0220 12:09:48.169695 8 log.go:172] (0xc000bf84d0) Data frame received for 1 I0220 12:09:48.169755 8 log.go:172] (0xc000bf84d0) (0xc001aaf4a0) Stream removed, broadcasting: 5 I0220 12:09:48.169808 8 log.go:172] (0xc001745860) (1) Data frame handling I0220 12:09:48.169818 8 log.go:172] (0xc001745860) (1) Data frame sent I0220 12:09:48.169862 8 log.go:172] (0xc000bf84d0) (0xc001745900) Stream removed, broadcasting: 3 I0220 12:09:48.169909 8 log.go:172] (0xc000bf84d0) (0xc001745860) Stream removed, broadcasting: 1 I0220 12:09:48.169934 8 log.go:172] (0xc000bf84d0) Go away received I0220 12:09:48.170085 8 log.go:172] (0xc000bf84d0) (0xc001745860) Stream removed, broadcasting: 1 I0220 12:09:48.170111 8 log.go:172] (0xc000bf84d0) (0xc001745900) Stream removed, broadcasting: 3 I0220 12:09:48.170123 8 log.go:172] (0xc000bf84d0) (0xc001aaf4a0) Stream removed, broadcasting: 5 Feb 20 12:09:48.170: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:09:48.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-hk8x5" for this suite. Feb 20 12:10:12.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:10:12.287: INFO: namespace: e2e-tests-pod-network-test-hk8x5, resource: bindings, ignored listing per whitelist Feb 20 12:10:12.383: INFO: namespace e2e-tests-pod-network-test-hk8x5 deletion completed in 24.190155328s • [SLOW TEST:69.327 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:10:12.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 20 12:10:12.717: INFO: Waiting up to 5m0s for pod "pod-f24a829f-53d9-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-p4t8z" to be 
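The Granular Checks networking test above execs into a host-test-container and curls the test pod's /dial endpoint, which in turn probes a peer pod over HTTP; the URL shape is visible verbatim in the exec log. A rough Go equivalent of that probe (only the query string is taken from the log; the function and parameter names are hypothetical, and it assumes it runs somewhere the pod IPs, 10.32.0.x here, are routable, e.g. inside the cluster):

package sketch

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

// dialPeer asks the proxy test pod (proxyIP) to reach the target pod
// (targetIP) over HTTP and returns whatever the /dial endpoint reports.
func dialPeer(proxyIP, targetIP string) (string, error) {
    url := fmt.Sprintf(
        "http://%s:8080/dial?request=hostName&protocol=http&host=%s&port=8080&tries=1",
        proxyIP, targetIP)
    resp, err := http.Get(url)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        return "", err
    }
    return string(body), nil
}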
"success or failure" Feb 20 12:10:12.742: INFO: Pod "pod-f24a829f-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.809734ms Feb 20 12:10:14.763: INFO: Pod "pod-f24a829f-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045447062s Feb 20 12:10:16.771: INFO: Pod "pod-f24a829f-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053232502s Feb 20 12:10:18.786: INFO: Pod "pod-f24a829f-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068514552s Feb 20 12:10:20.863: INFO: Pod "pod-f24a829f-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145034336s Feb 20 12:10:22.890: INFO: Pod "pod-f24a829f-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.172332335s Feb 20 12:10:24.902: INFO: Pod "pod-f24a829f-53d9-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.184049385s STEP: Saw pod success Feb 20 12:10:24.902: INFO: Pod "pod-f24a829f-53d9-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:10:24.905: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f24a829f-53d9-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 12:10:25.455: INFO: Waiting for pod pod-f24a829f-53d9-11ea-bcb7-0242ac110008 to disappear Feb 20 12:10:25.792: INFO: Pod pod-f24a829f-53d9-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:10:25.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-p4t8z" for this suite. 
Feb 20 12:10:32.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:10:32.161: INFO: namespace: e2e-tests-emptydir-p4t8z, resource: bindings, ignored listing per whitelist Feb 20 12:10:32.244: INFO: namespace e2e-tests-emptydir-p4t8z deletion completed in 6.425889303s • [SLOW TEST:19.860 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:10:32.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-fe14c386-53d9-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 12:10:32.427: INFO: Waiting up to 5m0s for pod "pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008" in namespace "e2e-tests-secrets-r8hzt" to be "success or failure" Feb 20 12:10:32.431: INFO: Pod "pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247535ms Feb 20 12:10:34.467: INFO: Pod "pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03990279s Feb 20 12:10:36.486: INFO: Pod "pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058911804s Feb 20 12:10:38.506: INFO: Pod "pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078633306s Feb 20 12:10:40.531: INFO: Pod "pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104395087s Feb 20 12:10:42.571: INFO: Pod "pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.143933103s STEP: Saw pod success Feb 20 12:10:42.571: INFO: Pod "pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:10:42.594: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 20 12:10:43.622: INFO: Waiting for pod pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008 to disappear Feb 20 12:10:43.874: INFO: Pod pod-secrets-fe157fa8-53d9-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:10:43.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-r8hzt" for this suite. 
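The [sig-storage] EmptyDir volumes test summarized above writes a 0644 file into an emptyDir backed by the node's default medium and checks the permissions from inside the container. A sketch of such a pod (hypothetical names; the shell command stands in for the test's own mounttest image; pre-1.17 client-go assumed):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createEmptyDirPod writes a 0644 file into an emptyDir on the default medium
// (node disk rather than tmpfs) and prints its permissions.
func createEmptyDirPod(c *kubernetes.Clientset, ns string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name:         "test-volume",
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}, // default medium
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                Command: []string{"sh", "-c",
                    "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    _, err := c.CoreV1().Pods(ns).Create(pod)
    return err
}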
Feb 20 12:10:50.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:10:50.218: INFO: namespace: e2e-tests-secrets-r8hzt, resource: bindings, ignored listing per whitelist Feb 20 12:10:50.239: INFO: namespace e2e-tests-secrets-r8hzt deletion completed in 6.338269389s • [SLOW TEST:17.994 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:10:50.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 20 12:10:50.465: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 20 12:10:50.521: INFO: Waiting for terminating namespaces to be deleted... Feb 20 12:10:50.533: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 20 12:10:50.556: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 20 12:10:50.556: INFO: Container coredns ready: true, restart count 0 Feb 20 12:10:50.556: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 20 12:10:50.556: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 20 12:10:50.556: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 20 12:10:50.556: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 20 12:10:50.556: INFO: Container coredns ready: true, restart count 0 Feb 20 12:10:50.556: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 20 12:10:50.556: INFO: Container kube-proxy ready: true, restart count 0 Feb 20 12:10:50.556: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 20 12:10:50.556: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 20 12:10:50.556: INFO: Container weave ready: true, restart count 0 Feb 20 12:10:50.556: INFO: Container weave-npc ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-102dcaef-53da-11ea-bcb7-0242ac110008 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-102dcaef-53da-11ea-bcb7-0242ac110008 off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label kubernetes.io/e2e-102dcaef-53da-11ea-bcb7-0242ac110008 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:11:12.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-n4jbn" for this suite. Feb 20 12:11:27.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:11:27.173: INFO: namespace: e2e-tests-sched-pred-n4jbn, resource: bindings, ignored listing per whitelist Feb 20 12:11:27.210: INFO: namespace e2e-tests-sched-pred-n4jbn deletion completed in 14.213234652s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:36.971 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:11:27.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Feb 20 12:11:27.572: INFO: Waiting up to 5m0s for pod "var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008" in namespace "e2e-tests-var-expansion-8qq8p" to be "success or failure" Feb 20 12:11:27.634: INFO: Pod "var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 62.002842ms Feb 20 12:11:29.715: INFO: Pod "var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143517355s Feb 20 12:11:31.730: INFO: Pod "var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158690571s Feb 20 12:11:33.966: INFO: Pod "var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394467942s Feb 20 12:11:35.996: INFO: Pod "var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
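The SchedulerPredicates test summarized above labels the node that accepted a probe pod (with value 42, as the log shows) and then relaunches the pod with a matching nodeSelector. A sketch of the relaunched pod (the label key is hypothetical, following the kubernetes.io/e2e-<uid> pattern in the log; the nginx image is reused from elsewhere in this run; pre-1.17 client-go assumed):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createNodeSelectorPod only schedules onto nodes carrying labelKey with the
// value "42", mirroring the label applied to the chosen node earlier.
func createNodeSelectorPod(c *kubernetes.Clientset, ns, labelKey string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-node-selector"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{labelKey: "42"},
            Containers: []corev1.Container{{
                Name:  "with-labels",
                Image: "docker.io/library/nginx:1.14-alpine",
            }},
        },
    }
    _, err := c.CoreV1().Pods(ns).Create(pod)
    return err
}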
Elapsed: 8.424391562s Feb 20 12:11:38.007: INFO: Pod "var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.435343607s STEP: Saw pod success Feb 20 12:11:38.007: INFO: Pod "var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:11:38.010: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008 container dapi-container: STEP: delete the pod Feb 20 12:11:38.168: INFO: Waiting for pod var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008 to disappear Feb 20 12:11:38.269: INFO: Pod var-expansion-1eee6159-53da-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:11:38.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-8qq8p" for this suite. Feb 20 12:11:44.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:11:44.654: INFO: namespace: e2e-tests-var-expansion-8qq8p, resource: bindings, ignored listing per whitelist Feb 20 12:11:44.729: INFO: namespace e2e-tests-var-expansion-8qq8p deletion completed in 6.428951172s • [SLOW TEST:17.517 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:11:44.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 20 12:11:44.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fh978' Feb 20 12:11:47.192: INFO: stderr: "" Feb 20 12:11:47.192: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
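The Variable Expansion test completed above substitutes a pod-defined environment variable into the container's command line using the $(VAR) syntax, which the kubelet expands before exec (no shell involved). A sketch (hypothetical names and message; pre-1.17 client-go assumed):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createVarExpansionPod echoes the value of MESSAGE: the $(MESSAGE) token in
// args is expanded by the kubelet from the container's env, not by a shell.
func createVarExpansionPod(c *kubernetes.Clientset, ns string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"/bin/echo"},
                Args:    []string{"message:", "$(MESSAGE)"},
                Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "expanded by the kubelet"}},
            }},
        },
    }
    _, err := c.CoreV1().Pods(ns).Create(pod)
    return err
}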
Feb 20 12:11:47.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fh978' Feb 20 12:11:47.381: INFO: stderr: "" Feb 20 12:11:47.381: INFO: stdout: "update-demo-nautilus-9x9bh update-demo-nautilus-s4tf4 " Feb 20 12:11:47.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x9bh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh978' Feb 20 12:11:47.525: INFO: stderr: "" Feb 20 12:11:47.525: INFO: stdout: "" Feb 20 12:11:47.525: INFO: update-demo-nautilus-9x9bh is created but not running Feb 20 12:11:52.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fh978' Feb 20 12:11:52.674: INFO: stderr: "" Feb 20 12:11:52.674: INFO: stdout: "update-demo-nautilus-9x9bh update-demo-nautilus-s4tf4 " Feb 20 12:11:52.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x9bh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh978' Feb 20 12:11:52.756: INFO: stderr: "" Feb 20 12:11:52.756: INFO: stdout: "" Feb 20 12:11:52.756: INFO: update-demo-nautilus-9x9bh is created but not running Feb 20 12:11:57.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fh978' Feb 20 12:11:57.907: INFO: stderr: "" Feb 20 12:11:57.907: INFO: stdout: "update-demo-nautilus-9x9bh update-demo-nautilus-s4tf4 " Feb 20 12:11:57.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x9bh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh978' Feb 20 12:11:58.032: INFO: stderr: "" Feb 20 12:11:58.032: INFO: stdout: "" Feb 20 12:11:58.033: INFO: update-demo-nautilus-9x9bh is created but not running Feb 20 12:12:03.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fh978' Feb 20 12:12:03.231: INFO: stderr: "" Feb 20 12:12:03.231: INFO: stdout: "update-demo-nautilus-9x9bh update-demo-nautilus-s4tf4 " Feb 20 12:12:03.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x9bh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh978' Feb 20 12:12:03.378: INFO: stderr: "" Feb 20 12:12:03.378: INFO: stdout: "true" Feb 20 12:12:03.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x9bh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh978' Feb 20 12:12:03.515: INFO: stderr: "" Feb 20 12:12:03.515: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 12:12:03.515: INFO: validating pod update-demo-nautilus-9x9bh Feb 20 12:12:03.544: INFO: got data: { "image": "nautilus.jpg" } Feb 20 12:12:03.544: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 12:12:03.544: INFO: update-demo-nautilus-9x9bh is verified up and running Feb 20 12:12:03.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s4tf4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh978' Feb 20 12:12:03.674: INFO: stderr: "" Feb 20 12:12:03.674: INFO: stdout: "true" Feb 20 12:12:03.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s4tf4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fh978' Feb 20 12:12:03.758: INFO: stderr: "" Feb 20 12:12:03.758: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 20 12:12:03.758: INFO: validating pod update-demo-nautilus-s4tf4 Feb 20 12:12:03.772: INFO: got data: { "image": "nautilus.jpg" } Feb 20 12:12:03.772: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 20 12:12:03.772: INFO: update-demo-nautilus-s4tf4 is verified up and running STEP: using delete to clean up resources Feb 20 12:12:03.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fh978' Feb 20 12:12:03.953: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 20 12:12:03.953: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 20 12:12:03.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-fh978' Feb 20 12:12:04.367: INFO: stderr: "No resources found.\n" Feb 20 12:12:04.367: INFO: stdout: "" Feb 20 12:12:04.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-fh978 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 20 12:12:04.559: INFO: stderr: "" Feb 20 12:12:04.559: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:12:04.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fh978" for this suite. 
Feb 20 12:12:28.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:12:28.933: INFO: namespace: e2e-tests-kubectl-fh978, resource: bindings, ignored listing per whitelist Feb 20 12:12:28.958: INFO: namespace e2e-tests-kubectl-fh978 deletion completed in 24.380832827s • [SLOW TEST:44.228 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:12:28.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 20 12:12:29.053: INFO: Waiting up to 5m0s for pod "downward-api-439b5940-53da-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-4nfnh" to be "success or failure" Feb 20 12:12:29.159: INFO: Pod "downward-api-439b5940-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 105.863921ms Feb 20 12:12:31.428: INFO: Pod "downward-api-439b5940-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374844035s Feb 20 12:12:33.448: INFO: Pod "downward-api-439b5940-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.395062698s Feb 20 12:12:35.727: INFO: Pod "downward-api-439b5940-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.673987707s Feb 20 12:12:37.765: INFO: Pod "downward-api-439b5940-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.712076751s Feb 20 12:12:39.780: INFO: Pod "downward-api-439b5940-53da-11ea-bcb7-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 10.727007616s Feb 20 12:12:41.804: INFO: Pod "downward-api-439b5940-53da-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
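The kubectl Update Demo test summarized above drives everything through kubectl, but the object it creates from stdin is an ordinary two-replica ReplicationController running the nautilus test image under the name=update-demo selector (both visible in the kubectl output). An equivalent sketch in client-go, not the manifest the test actually pipes in (the container port is illustrative; pre-1.17 client-go assumed):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createUpdateDemoRC creates a two-replica nautilus replication controller of
// the kind the Update Demo test builds from a manifest on stdin.
func createUpdateDemoRC(c *kubernetes.Clientset, ns string) error {
    replicas := int32(2)
    labels := map[string]string{"name": "update-demo"}
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: labels,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "update-demo",
                        Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
                        Ports: []corev1.ContainerPort{{ContainerPort: 80}}, // illustrative
                    }},
                },
            },
        },
    }
    _, err := c.CoreV1().ReplicationControllers(ns).Create(rc)
    return err
}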
Elapsed: 12.750783299s STEP: Saw pod success Feb 20 12:12:41.804: INFO: Pod "downward-api-439b5940-53da-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:12:41.837: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-439b5940-53da-11ea-bcb7-0242ac110008 container dapi-container: STEP: delete the pod Feb 20 12:12:42.042: INFO: Waiting for pod downward-api-439b5940-53da-11ea-bcb7-0242ac110008 to disappear Feb 20 12:12:42.053: INFO: Pod downward-api-439b5940-53da-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:12:42.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4nfnh" for this suite. Feb 20 12:12:48.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:12:48.411: INFO: namespace: e2e-tests-downward-api-4nfnh, resource: bindings, ignored listing per whitelist Feb 20 12:12:48.422: INFO: namespace e2e-tests-downward-api-4nfnh deletion completed in 6.356785982s • [SLOW TEST:19.464 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:12:48.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 20 12:12:48.843: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:13:05.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-2lcp6" for this suite. 
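The [sig-node] Downward API test summarized above exposes the node's IP to the container as an environment variable backed by a fieldRef on status.hostIP. A sketch (hypothetical names; pre-1.17 client-go assumed):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createHostIPPod prints HOST_IP, which the kubelet fills in from the pod's
// status.hostIP field via the downward API.
func createHostIPPod(c *kubernetes.Clientset, ns string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
                Env: []corev1.EnvVar{{
                    Name: "HOST_IP",
                    ValueFrom: &corev1.EnvVarSource{
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                    },
                }},
            }},
        },
    }
    _, err := c.CoreV1().Pods(ns).Create(pod)
    return err
}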
Feb 20 12:13:11.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:13:11.840: INFO: namespace: e2e-tests-init-container-2lcp6, resource: bindings, ignored listing per whitelist Feb 20 12:13:11.921: INFO: namespace e2e-tests-init-container-2lcp6 deletion completed in 6.182342582s • [SLOW TEST:23.499 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:13:11.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Feb 20 12:13:12.143: INFO: Waiting up to 5m0s for pod "client-containers-5d435e32-53da-11ea-bcb7-0242ac110008" in namespace "e2e-tests-containers-jjjfp" to be "success or failure" Feb 20 12:13:12.178: INFO: Pod "client-containers-5d435e32-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 34.625034ms Feb 20 12:13:14.192: INFO: Pod "client-containers-5d435e32-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048708189s Feb 20 12:13:16.215: INFO: Pod "client-containers-5d435e32-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071843325s Feb 20 12:13:18.229: INFO: Pod "client-containers-5d435e32-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085768649s Feb 20 12:13:20.242: INFO: Pod "client-containers-5d435e32-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099266681s Feb 20 12:13:22.282: INFO: Pod "client-containers-5d435e32-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.138532775s Feb 20 12:13:24.307: INFO: Pod "client-containers-5d435e32-53da-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
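The InitContainer test summarized above creates a RestartNever pod whose init containers must each run to completion, in order, before the regular container starts. A sketch (hypothetical names and busybox commands in place of the test's images; pre-1.17 client-go assumed):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createInitContainerPod runs two init containers to completion before the
// main container; with RestartPolicy Never the pod is not retried afterwards.
func createInitContainerPod(c *kubernetes.Clientset, ns string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{
                {Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
                {Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
            },
            Containers: []corev1.Container{{
                Name:    "run1",
                Image:   "busybox",
                Command: []string{"/bin/true"},
            }},
        },
    }
    _, err := c.CoreV1().Pods(ns).Create(pod)
    return err
}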
Elapsed: 12.163475781s STEP: Saw pod success Feb 20 12:13:24.307: INFO: Pod "client-containers-5d435e32-53da-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:13:24.314: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-5d435e32-53da-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 12:13:24.598: INFO: Waiting for pod client-containers-5d435e32-53da-11ea-bcb7-0242ac110008 to disappear Feb 20 12:13:24.627: INFO: Pod client-containers-5d435e32-53da-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:13:24.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-jjjfp" for this suite. Feb 20 12:13:30.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:13:31.764: INFO: namespace: e2e-tests-containers-jjjfp, resource: bindings, ignored listing per whitelist Feb 20 12:13:31.908: INFO: namespace e2e-tests-containers-jjjfp deletion completed in 7.154309314s • [SLOW TEST:19.986 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:13:31.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-6925f382-53da-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 12:13:32.066: INFO: Waiting up to 5m0s for pod "pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008" in namespace "e2e-tests-configmap-t5txh" to be "success or failure" Feb 20 12:13:32.094: INFO: Pod "pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 27.561868ms Feb 20 12:13:34.108: INFO: Pod "pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041685945s Feb 20 12:13:36.122: INFO: Pod "pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055261275s Feb 20 12:13:38.368: INFO: Pod "pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.302019684s Feb 20 12:13:40.560: INFO: Pod "pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.493615s Feb 20 12:13:42.667: INFO: Pod "pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.600850448s STEP: Saw pod success Feb 20 12:13:42.667: INFO: Pod "pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:13:42.711: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 20 12:13:42.856: INFO: Waiting for pod pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008 to disappear Feb 20 12:13:42.863: INFO: Pod pod-configmaps-6926b06f-53da-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:13:42.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-t5txh" for this suite. Feb 20 12:13:48.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:13:49.022: INFO: namespace: e2e-tests-configmap-t5txh, resource: bindings, ignored listing per whitelist Feb 20 12:13:49.079: INFO: namespace e2e-tests-configmap-t5txh deletion completed in 6.20660003s • [SLOW TEST:17.171 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:13:49.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 12:13:49.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-frlmj" to be "success or failure" Feb 20 12:13:49.327: INFO: Pod "downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 28.462733ms Feb 20 12:13:51.349: INFO: Pod "downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04974698s Feb 20 12:13:53.373: INFO: Pod "downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.07354398s Feb 20 12:13:55.588: INFO: Pod "downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288699404s Feb 20 12:13:57.603: INFO: Pod "downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.303767854s Feb 20 12:13:59.616: INFO: Pod "downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.317335395s STEP: Saw pod success Feb 20 12:13:59.616: INFO: Pod "downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:13:59.620: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 12:13:59.735: INFO: Waiting for pod downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008 to disappear Feb 20 12:13:59.743: INFO: Pod downwardapi-volume-736f737b-53da-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:13:59.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-frlmj" for this suite. Feb 20 12:14:05.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:14:05.929: INFO: namespace: e2e-tests-downward-api-frlmj, resource: bindings, ignored listing per whitelist Feb 20 12:14:05.991: INFO: namespace e2e-tests-downward-api-frlmj deletion completed in 6.24075137s • [SLOW TEST:16.912 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:14:05.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 12:14:06.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-bn698' Feb 20 12:14:06.361: INFO: stderr: "" Feb 20 12:14:06.361: INFO: stdout: "pod/e2e-test-nginx-pod created\n" 
STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Feb 20 12:14:06.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bn698' Feb 20 12:14:12.632: INFO: stderr: "" Feb 20 12:14:12.632: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:14:12.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bn698" for this suite. Feb 20 12:14:18.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:14:18.912: INFO: namespace: e2e-tests-kubectl-bn698, resource: bindings, ignored listing per whitelist Feb 20 12:14:18.954: INFO: namespace e2e-tests-kubectl-bn698 deletion completed in 6.234817806s • [SLOW TEST:12.962 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:14:18.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 12:14:19.177: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:14:29.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-5lmdc" for this suite. 
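The [k8s.io] Pods spec above fetches container logs from the API server's /log endpoint over a websocket. Below is a minimal client-go sketch of the same retrieval, using plain HTTP streaming rather than the websocket transport the conformance test drives; the pod name, namespace and kubeconfig path are assumptions, not the test's fixtures:

```go
// Sketch only: streams a pod's logs via client-go. The conformance spec above
// drives the same /log endpoint over a websocket; plain HTTP streaming is used
// here for brevity. Pod name, namespace and kubeconfig path are assumptions.
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GetLogs builds a request against /api/v1/namespaces/<ns>/pods/<name>/log.
	req := cs.CoreV1().Pods("default").GetLogs("pod-logs-example", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO()) // client-go v0.18+ signature
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```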
Feb 20 12:15:11.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:15:11.502: INFO: namespace: e2e-tests-pods-5lmdc, resource: bindings, ignored listing per whitelist Feb 20 12:15:11.607: INFO: namespace e2e-tests-pods-5lmdc deletion completed in 42.219450904s • [SLOW TEST:52.653 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:15:11.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 20 12:15:35.983: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5svgw PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:15:35.983: INFO: >>> kubeConfig: /root/.kube/config I0220 12:15:36.081840 8 log.go:172] (0xc000bf84d0) (0xc0009ce960) Create stream I0220 12:15:36.081904 8 log.go:172] (0xc000bf84d0) (0xc0009ce960) Stream added, broadcasting: 1 I0220 12:15:36.090009 8 log.go:172] (0xc000bf84d0) Reply frame received for 1 I0220 12:15:36.090069 8 log.go:172] (0xc000bf84d0) (0xc001b8bc20) Create stream I0220 12:15:36.090081 8 log.go:172] (0xc000bf84d0) (0xc001b8bc20) Stream added, broadcasting: 3 I0220 12:15:36.091694 8 log.go:172] (0xc000bf84d0) Reply frame received for 3 I0220 12:15:36.091718 8 log.go:172] (0xc000bf84d0) (0xc0009cea00) Create stream I0220 12:15:36.091727 8 log.go:172] (0xc000bf84d0) (0xc0009cea00) Stream added, broadcasting: 5 I0220 12:15:36.092830 8 log.go:172] (0xc000bf84d0) Reply frame received for 5 I0220 12:15:36.225989 8 log.go:172] (0xc000bf84d0) Data frame received for 3 I0220 12:15:36.226130 8 log.go:172] (0xc001b8bc20) (3) Data frame handling I0220 12:15:36.226147 8 log.go:172] (0xc001b8bc20) (3) Data frame sent I0220 12:15:36.369685 8 log.go:172] (0xc000bf84d0) (0xc001b8bc20) Stream removed, broadcasting: 3 I0220 12:15:36.369883 8 log.go:172] (0xc000bf84d0) Data frame received for 1 I0220 12:15:36.369928 8 log.go:172] (0xc000bf84d0) (0xc0009cea00) Stream removed, broadcasting: 5 I0220 12:15:36.370014 8 log.go:172] (0xc0009ce960) (1) Data frame handling I0220 12:15:36.370087 8 log.go:172] (0xc0009ce960) (1) Data frame sent I0220 12:15:36.370114 8 log.go:172] (0xc000bf84d0) (0xc0009ce960) Stream removed, broadcasting: 1 I0220 12:15:36.370209 8 log.go:172] (0xc000bf84d0) Go 
away received I0220 12:15:36.370353 8 log.go:172] (0xc000bf84d0) (0xc0009ce960) Stream removed, broadcasting: 1 I0220 12:15:36.370370 8 log.go:172] (0xc000bf84d0) (0xc001b8bc20) Stream removed, broadcasting: 3 I0220 12:15:36.370393 8 log.go:172] (0xc000bf84d0) (0xc0009cea00) Stream removed, broadcasting: 5 Feb 20 12:15:36.370: INFO: Exec stderr: "" Feb 20 12:15:36.370: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5svgw PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:15:36.370: INFO: >>> kubeConfig: /root/.kube/config I0220 12:15:36.463548 8 log.go:172] (0xc000ae62c0) (0xc001f82d20) Create stream I0220 12:15:36.463585 8 log.go:172] (0xc000ae62c0) (0xc001f82d20) Stream added, broadcasting: 1 I0220 12:15:36.470826 8 log.go:172] (0xc000ae62c0) Reply frame received for 1 I0220 12:15:36.470865 8 log.go:172] (0xc000ae62c0) (0xc0009ceaa0) Create stream I0220 12:15:36.470873 8 log.go:172] (0xc000ae62c0) (0xc0009ceaa0) Stream added, broadcasting: 3 I0220 12:15:36.473662 8 log.go:172] (0xc000ae62c0) Reply frame received for 3 I0220 12:15:36.473682 8 log.go:172] (0xc000ae62c0) (0xc0009ceb40) Create stream I0220 12:15:36.473689 8 log.go:172] (0xc000ae62c0) (0xc0009ceb40) Stream added, broadcasting: 5 I0220 12:15:36.477264 8 log.go:172] (0xc000ae62c0) Reply frame received for 5 I0220 12:15:36.831172 8 log.go:172] (0xc000ae62c0) Data frame received for 3 I0220 12:15:36.831209 8 log.go:172] (0xc0009ceaa0) (3) Data frame handling I0220 12:15:36.831225 8 log.go:172] (0xc0009ceaa0) (3) Data frame sent I0220 12:15:37.001865 8 log.go:172] (0xc000ae62c0) (0xc0009ceaa0) Stream removed, broadcasting: 3 I0220 12:15:37.002161 8 log.go:172] (0xc000ae62c0) Data frame received for 1 I0220 12:15:37.002246 8 log.go:172] (0xc001f82d20) (1) Data frame handling I0220 12:15:37.002264 8 log.go:172] (0xc001f82d20) (1) Data frame sent I0220 12:15:37.002294 8 log.go:172] (0xc000ae62c0) (0xc0009ceb40) Stream removed, broadcasting: 5 I0220 12:15:37.002383 8 log.go:172] (0xc000ae62c0) (0xc001f82d20) Stream removed, broadcasting: 1 I0220 12:15:37.002399 8 log.go:172] (0xc000ae62c0) Go away received I0220 12:15:37.002654 8 log.go:172] (0xc000ae62c0) (0xc001f82d20) Stream removed, broadcasting: 1 I0220 12:15:37.002752 8 log.go:172] (0xc000ae62c0) (0xc0009ceaa0) Stream removed, broadcasting: 3 I0220 12:15:37.002762 8 log.go:172] (0xc000ae62c0) (0xc0009ceb40) Stream removed, broadcasting: 5 Feb 20 12:15:37.002: INFO: Exec stderr: "" Feb 20 12:15:37.002: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5svgw PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:15:37.002: INFO: >>> kubeConfig: /root/.kube/config I0220 12:15:37.103620 8 log.go:172] (0xc000ae6790) (0xc001f82fa0) Create stream I0220 12:15:37.103736 8 log.go:172] (0xc000ae6790) (0xc001f82fa0) Stream added, broadcasting: 1 I0220 12:15:37.111560 8 log.go:172] (0xc000ae6790) Reply frame received for 1 I0220 12:15:37.111615 8 log.go:172] (0xc000ae6790) (0xc001b8bd60) Create stream I0220 12:15:37.111638 8 log.go:172] (0xc000ae6790) (0xc001b8bd60) Stream added, broadcasting: 3 I0220 12:15:37.112789 8 log.go:172] (0xc000ae6790) Reply frame received for 3 I0220 12:15:37.112833 8 log.go:172] (0xc000ae6790) (0xc001b8be00) Create stream I0220 12:15:37.112843 8 log.go:172] (0xc000ae6790) (0xc001b8be00) Stream added, broadcasting: 5 I0220 12:15:37.114318 
8 log.go:172] (0xc000ae6790) Reply frame received for 5 I0220 12:15:37.229380 8 log.go:172] (0xc000ae6790) Data frame received for 3 I0220 12:15:37.229515 8 log.go:172] (0xc001b8bd60) (3) Data frame handling I0220 12:15:37.229551 8 log.go:172] (0xc001b8bd60) (3) Data frame sent I0220 12:15:37.380259 8 log.go:172] (0xc000ae6790) (0xc001b8bd60) Stream removed, broadcasting: 3 I0220 12:15:37.380355 8 log.go:172] (0xc000ae6790) Data frame received for 1 I0220 12:15:37.380408 8 log.go:172] (0xc000ae6790) (0xc001b8be00) Stream removed, broadcasting: 5 I0220 12:15:37.380454 8 log.go:172] (0xc001f82fa0) (1) Data frame handling I0220 12:15:37.380467 8 log.go:172] (0xc001f82fa0) (1) Data frame sent I0220 12:15:37.380477 8 log.go:172] (0xc000ae6790) (0xc001f82fa0) Stream removed, broadcasting: 1 I0220 12:15:37.380494 8 log.go:172] (0xc000ae6790) Go away received I0220 12:15:37.380656 8 log.go:172] (0xc000ae6790) (0xc001f82fa0) Stream removed, broadcasting: 1 I0220 12:15:37.380701 8 log.go:172] (0xc000ae6790) (0xc001b8bd60) Stream removed, broadcasting: 3 I0220 12:15:37.380728 8 log.go:172] (0xc000ae6790) (0xc001b8be00) Stream removed, broadcasting: 5 Feb 20 12:15:37.380: INFO: Exec stderr: "" Feb 20 12:15:37.380: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5svgw PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:15:37.380: INFO: >>> kubeConfig: /root/.kube/config I0220 12:15:37.570503 8 log.go:172] (0xc000d9b290) (0xc001c100a0) Create stream I0220 12:15:37.570602 8 log.go:172] (0xc000d9b290) (0xc001c100a0) Stream added, broadcasting: 1 I0220 12:15:37.574674 8 log.go:172] (0xc000d9b290) Reply frame received for 1 I0220 12:15:37.574704 8 log.go:172] (0xc000d9b290) (0xc00175c320) Create stream I0220 12:15:37.574713 8 log.go:172] (0xc000d9b290) (0xc00175c320) Stream added, broadcasting: 3 I0220 12:15:37.576695 8 log.go:172] (0xc000d9b290) Reply frame received for 3 I0220 12:15:37.576738 8 log.go:172] (0xc000d9b290) (0xc00175c3c0) Create stream I0220 12:15:37.576755 8 log.go:172] (0xc000d9b290) (0xc00175c3c0) Stream added, broadcasting: 5 I0220 12:15:37.578585 8 log.go:172] (0xc000d9b290) Reply frame received for 5 I0220 12:15:37.744894 8 log.go:172] (0xc000d9b290) Data frame received for 3 I0220 12:15:37.745035 8 log.go:172] (0xc00175c320) (3) Data frame handling I0220 12:15:37.745084 8 log.go:172] (0xc00175c320) (3) Data frame sent I0220 12:15:37.922834 8 log.go:172] (0xc000d9b290) (0xc00175c320) Stream removed, broadcasting: 3 I0220 12:15:37.922964 8 log.go:172] (0xc000d9b290) Data frame received for 1 I0220 12:15:37.923024 8 log.go:172] (0xc001c100a0) (1) Data frame handling I0220 12:15:37.923068 8 log.go:172] (0xc001c100a0) (1) Data frame sent I0220 12:15:37.923128 8 log.go:172] (0xc000d9b290) (0xc00175c3c0) Stream removed, broadcasting: 5 I0220 12:15:37.923193 8 log.go:172] (0xc000d9b290) (0xc001c100a0) Stream removed, broadcasting: 1 I0220 12:15:37.923244 8 log.go:172] (0xc000d9b290) Go away received I0220 12:15:37.923419 8 log.go:172] (0xc000d9b290) (0xc001c100a0) Stream removed, broadcasting: 1 I0220 12:15:37.923442 8 log.go:172] (0xc000d9b290) (0xc00175c320) Stream removed, broadcasting: 3 I0220 12:15:37.923455 8 log.go:172] (0xc000d9b290) (0xc00175c3c0) Stream removed, broadcasting: 5 Feb 20 12:15:37.923: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 20 12:15:37.923: INFO: 
ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5svgw PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:15:37.923: INFO: >>> kubeConfig: /root/.kube/config I0220 12:15:38.011037 8 log.go:172] (0xc000ae6c60) (0xc001f83400) Create stream I0220 12:15:38.011116 8 log.go:172] (0xc000ae6c60) (0xc001f83400) Stream added, broadcasting: 1 I0220 12:15:38.019751 8 log.go:172] (0xc000ae6c60) Reply frame received for 1 I0220 12:15:38.019803 8 log.go:172] (0xc000ae6c60) (0xc00175c460) Create stream I0220 12:15:38.019814 8 log.go:172] (0xc000ae6c60) (0xc00175c460) Stream added, broadcasting: 3 I0220 12:15:38.021191 8 log.go:172] (0xc000ae6c60) Reply frame received for 3 I0220 12:15:38.021217 8 log.go:172] (0xc000ae6c60) (0xc001d6dae0) Create stream I0220 12:15:38.021226 8 log.go:172] (0xc000ae6c60) (0xc001d6dae0) Stream added, broadcasting: 5 I0220 12:15:38.022543 8 log.go:172] (0xc000ae6c60) Reply frame received for 5 I0220 12:15:38.147641 8 log.go:172] (0xc000ae6c60) Data frame received for 3 I0220 12:15:38.147721 8 log.go:172] (0xc00175c460) (3) Data frame handling I0220 12:15:38.147769 8 log.go:172] (0xc00175c460) (3) Data frame sent I0220 12:15:38.243130 8 log.go:172] (0xc000ae6c60) Data frame received for 1 I0220 12:15:38.243174 8 log.go:172] (0xc001f83400) (1) Data frame handling I0220 12:15:38.243209 8 log.go:172] (0xc001f83400) (1) Data frame sent I0220 12:15:38.243262 8 log.go:172] (0xc000ae6c60) (0xc001f83400) Stream removed, broadcasting: 1 I0220 12:15:38.243398 8 log.go:172] (0xc000ae6c60) (0xc00175c460) Stream removed, broadcasting: 3 I0220 12:15:38.243474 8 log.go:172] (0xc000ae6c60) (0xc001d6dae0) Stream removed, broadcasting: 5 I0220 12:15:38.243516 8 log.go:172] (0xc000ae6c60) Go away received I0220 12:15:38.243741 8 log.go:172] (0xc000ae6c60) (0xc001f83400) Stream removed, broadcasting: 1 I0220 12:15:38.243765 8 log.go:172] (0xc000ae6c60) (0xc00175c460) Stream removed, broadcasting: 3 I0220 12:15:38.243773 8 log.go:172] (0xc000ae6c60) (0xc001d6dae0) Stream removed, broadcasting: 5 Feb 20 12:15:38.243: INFO: Exec stderr: "" Feb 20 12:15:38.243: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5svgw PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:15:38.243: INFO: >>> kubeConfig: /root/.kube/config I0220 12:15:38.317368 8 log.go:172] (0xc00209c2c0) (0xc00175c6e0) Create stream I0220 12:15:38.317520 8 log.go:172] (0xc00209c2c0) (0xc00175c6e0) Stream added, broadcasting: 1 I0220 12:15:38.331610 8 log.go:172] (0xc00209c2c0) Reply frame received for 1 I0220 12:15:38.331761 8 log.go:172] (0xc00209c2c0) (0xc001c10140) Create stream I0220 12:15:38.331801 8 log.go:172] (0xc00209c2c0) (0xc001c10140) Stream added, broadcasting: 3 I0220 12:15:38.334179 8 log.go:172] (0xc00209c2c0) Reply frame received for 3 I0220 12:15:38.334298 8 log.go:172] (0xc00209c2c0) (0xc001c101e0) Create stream I0220 12:15:38.334335 8 log.go:172] (0xc00209c2c0) (0xc001c101e0) Stream added, broadcasting: 5 I0220 12:15:38.337010 8 log.go:172] (0xc00209c2c0) Reply frame received for 5 I0220 12:15:38.497567 8 log.go:172] (0xc00209c2c0) Data frame received for 3 I0220 12:15:38.497647 8 log.go:172] (0xc001c10140) (3) Data frame handling I0220 12:15:38.497659 8 log.go:172] (0xc001c10140) (3) Data frame sent I0220 12:15:38.637016 8 log.go:172] (0xc00209c2c0) Data frame received for 1 I0220 12:15:38.637225 
8 log.go:172] (0xc00175c6e0) (1) Data frame handling I0220 12:15:38.637266 8 log.go:172] (0xc00209c2c0) (0xc001c101e0) Stream removed, broadcasting: 5 I0220 12:15:38.637385 8 log.go:172] (0xc00209c2c0) (0xc001c10140) Stream removed, broadcasting: 3 I0220 12:15:38.637447 8 log.go:172] (0xc00175c6e0) (1) Data frame sent I0220 12:15:38.637520 8 log.go:172] (0xc00209c2c0) (0xc00175c6e0) Stream removed, broadcasting: 1 I0220 12:15:38.637581 8 log.go:172] (0xc00209c2c0) Go away received I0220 12:15:38.637778 8 log.go:172] (0xc00209c2c0) (0xc00175c6e0) Stream removed, broadcasting: 1 I0220 12:15:38.637840 8 log.go:172] (0xc00209c2c0) (0xc001c10140) Stream removed, broadcasting: 3 I0220 12:15:38.637900 8 log.go:172] (0xc00209c2c0) (0xc001c101e0) Stream removed, broadcasting: 5 Feb 20 12:15:38.638: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 20 12:15:38.638: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5svgw PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:15:38.638: INFO: >>> kubeConfig: /root/.kube/config I0220 12:15:38.701236 8 log.go:172] (0xc000d9b760) (0xc001c103c0) Create stream I0220 12:15:38.701272 8 log.go:172] (0xc000d9b760) (0xc001c103c0) Stream added, broadcasting: 1 I0220 12:15:38.704894 8 log.go:172] (0xc000d9b760) Reply frame received for 1 I0220 12:15:38.704917 8 log.go:172] (0xc000d9b760) (0xc00175c8c0) Create stream I0220 12:15:38.704923 8 log.go:172] (0xc000d9b760) (0xc00175c8c0) Stream added, broadcasting: 3 I0220 12:15:38.706479 8 log.go:172] (0xc000d9b760) Reply frame received for 3 I0220 12:15:38.706513 8 log.go:172] (0xc000d9b760) (0xc001f834a0) Create stream I0220 12:15:38.706526 8 log.go:172] (0xc000d9b760) (0xc001f834a0) Stream added, broadcasting: 5 I0220 12:15:38.708610 8 log.go:172] (0xc000d9b760) Reply frame received for 5 I0220 12:15:38.817089 8 log.go:172] (0xc000d9b760) Data frame received for 3 I0220 12:15:38.817172 8 log.go:172] (0xc00175c8c0) (3) Data frame handling I0220 12:15:38.817188 8 log.go:172] (0xc00175c8c0) (3) Data frame sent I0220 12:15:38.928346 8 log.go:172] (0xc000d9b760) Data frame received for 1 I0220 12:15:38.928419 8 log.go:172] (0xc000d9b760) (0xc00175c8c0) Stream removed, broadcasting: 3 I0220 12:15:38.928476 8 log.go:172] (0xc001c103c0) (1) Data frame handling I0220 12:15:38.928506 8 log.go:172] (0xc001c103c0) (1) Data frame sent I0220 12:15:38.928526 8 log.go:172] (0xc000d9b760) (0xc001f834a0) Stream removed, broadcasting: 5 I0220 12:15:38.928586 8 log.go:172] (0xc000d9b760) (0xc001c103c0) Stream removed, broadcasting: 1 I0220 12:15:38.928615 8 log.go:172] (0xc000d9b760) Go away received I0220 12:15:38.928767 8 log.go:172] (0xc000d9b760) (0xc001c103c0) Stream removed, broadcasting: 1 I0220 12:15:38.928781 8 log.go:172] (0xc000d9b760) (0xc00175c8c0) Stream removed, broadcasting: 3 I0220 12:15:38.928789 8 log.go:172] (0xc000d9b760) (0xc001f834a0) Stream removed, broadcasting: 5 Feb 20 12:15:38.928: INFO: Exec stderr: "" Feb 20 12:15:38.928: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5svgw PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:15:38.928: INFO: >>> kubeConfig: /root/.kube/config I0220 12:15:39.010699 8 log.go:172] (0xc000d9bc30) (0xc001c106e0) Create stream I0220 12:15:39.010771 8 
log.go:172] (0xc000d9bc30) (0xc001c106e0) Stream added, broadcasting: 1 I0220 12:15:39.015700 8 log.go:172] (0xc000d9bc30) Reply frame received for 1 I0220 12:15:39.015767 8 log.go:172] (0xc000d9bc30) (0xc00175c960) Create stream I0220 12:15:39.015795 8 log.go:172] (0xc000d9bc30) (0xc00175c960) Stream added, broadcasting: 3 I0220 12:15:39.017466 8 log.go:172] (0xc000d9bc30) Reply frame received for 3 I0220 12:15:39.017492 8 log.go:172] (0xc000d9bc30) (0xc001f83540) Create stream I0220 12:15:39.017505 8 log.go:172] (0xc000d9bc30) (0xc001f83540) Stream added, broadcasting: 5 I0220 12:15:39.018888 8 log.go:172] (0xc000d9bc30) Reply frame received for 5 I0220 12:15:39.163968 8 log.go:172] (0xc000d9bc30) Data frame received for 3 I0220 12:15:39.164048 8 log.go:172] (0xc00175c960) (3) Data frame handling I0220 12:15:39.164063 8 log.go:172] (0xc00175c960) (3) Data frame sent I0220 12:15:39.259771 8 log.go:172] (0xc000d9bc30) Data frame received for 1 I0220 12:15:39.259839 8 log.go:172] (0xc000d9bc30) (0xc00175c960) Stream removed, broadcasting: 3 I0220 12:15:39.259878 8 log.go:172] (0xc001c106e0) (1) Data frame handling I0220 12:15:39.259903 8 log.go:172] (0xc000d9bc30) (0xc001f83540) Stream removed, broadcasting: 5 I0220 12:15:39.259975 8 log.go:172] (0xc001c106e0) (1) Data frame sent I0220 12:15:39.259988 8 log.go:172] (0xc000d9bc30) (0xc001c106e0) Stream removed, broadcasting: 1 I0220 12:15:39.260007 8 log.go:172] (0xc000d9bc30) Go away received I0220 12:15:39.260310 8 log.go:172] (0xc000d9bc30) (0xc001c106e0) Stream removed, broadcasting: 1 I0220 12:15:39.260340 8 log.go:172] (0xc000d9bc30) (0xc00175c960) Stream removed, broadcasting: 3 I0220 12:15:39.260372 8 log.go:172] (0xc000d9bc30) (0xc001f83540) Stream removed, broadcasting: 5 Feb 20 12:15:39.260: INFO: Exec stderr: "" Feb 20 12:15:39.260: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5svgw PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:15:39.260: INFO: >>> kubeConfig: /root/.kube/config I0220 12:15:39.325995 8 log.go:172] (0xc000ae7130) (0xc001f837c0) Create stream I0220 12:15:39.326170 8 log.go:172] (0xc000ae7130) (0xc001f837c0) Stream added, broadcasting: 1 I0220 12:15:39.383861 8 log.go:172] (0xc000ae7130) Reply frame received for 1 I0220 12:15:39.384024 8 log.go:172] (0xc000ae7130) (0xc0009cec80) Create stream I0220 12:15:39.384069 8 log.go:172] (0xc000ae7130) (0xc0009cec80) Stream added, broadcasting: 3 I0220 12:15:39.388182 8 log.go:172] (0xc000ae7130) Reply frame received for 3 I0220 12:15:39.388203 8 log.go:172] (0xc000ae7130) (0xc0009ced20) Create stream I0220 12:15:39.388210 8 log.go:172] (0xc000ae7130) (0xc0009ced20) Stream added, broadcasting: 5 I0220 12:15:39.391072 8 log.go:172] (0xc000ae7130) Reply frame received for 5 I0220 12:15:39.574708 8 log.go:172] (0xc000ae7130) Data frame received for 3 I0220 12:15:39.574779 8 log.go:172] (0xc0009cec80) (3) Data frame handling I0220 12:15:39.574814 8 log.go:172] (0xc0009cec80) (3) Data frame sent I0220 12:15:39.682643 8 log.go:172] (0xc000ae7130) Data frame received for 1 I0220 12:15:39.682756 8 log.go:172] (0xc001f837c0) (1) Data frame handling I0220 12:15:39.682780 8 log.go:172] (0xc001f837c0) (1) Data frame sent I0220 12:15:39.682922 8 log.go:172] (0xc000ae7130) (0xc001f837c0) Stream removed, broadcasting: 1 I0220 12:15:39.683075 8 log.go:172] (0xc000ae7130) (0xc0009cec80) Stream removed, broadcasting: 3 I0220 12:15:39.683308 8 log.go:172] 
(0xc000ae7130) (0xc0009ced20) Stream removed, broadcasting: 5 I0220 12:15:39.683363 8 log.go:172] (0xc000ae7130) Go away received I0220 12:15:39.683836 8 log.go:172] (0xc000ae7130) (0xc001f837c0) Stream removed, broadcasting: 1 I0220 12:15:39.683927 8 log.go:172] (0xc000ae7130) (0xc0009cec80) Stream removed, broadcasting: 3 I0220 12:15:39.683944 8 log.go:172] (0xc000ae7130) (0xc0009ced20) Stream removed, broadcasting: 5 Feb 20 12:15:39.683: INFO: Exec stderr: "" Feb 20 12:15:39.684: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5svgw PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 20 12:15:39.684: INFO: >>> kubeConfig: /root/.kube/config I0220 12:15:39.751290 8 log.go:172] (0xc001048160) (0xc001c10960) Create stream I0220 12:15:39.751431 8 log.go:172] (0xc001048160) (0xc001c10960) Stream added, broadcasting: 1 I0220 12:15:39.754367 8 log.go:172] (0xc001048160) Reply frame received for 1 I0220 12:15:39.754411 8 log.go:172] (0xc001048160) (0xc0009cee60) Create stream I0220 12:15:39.754428 8 log.go:172] (0xc001048160) (0xc0009cee60) Stream added, broadcasting: 3 I0220 12:15:39.755497 8 log.go:172] (0xc001048160) Reply frame received for 3 I0220 12:15:39.755529 8 log.go:172] (0xc001048160) (0xc001d6dc20) Create stream I0220 12:15:39.755540 8 log.go:172] (0xc001048160) (0xc001d6dc20) Stream added, broadcasting: 5 I0220 12:15:39.757081 8 log.go:172] (0xc001048160) Reply frame received for 5 I0220 12:15:39.926210 8 log.go:172] (0xc001048160) Data frame received for 3 I0220 12:15:39.926272 8 log.go:172] (0xc0009cee60) (3) Data frame handling I0220 12:15:39.926282 8 log.go:172] (0xc0009cee60) (3) Data frame sent I0220 12:15:40.036184 8 log.go:172] (0xc001048160) Data frame received for 1 I0220 12:15:40.036353 8 log.go:172] (0xc001048160) (0xc0009cee60) Stream removed, broadcasting: 3 I0220 12:15:40.036392 8 log.go:172] (0xc001c10960) (1) Data frame handling I0220 12:15:40.036404 8 log.go:172] (0xc001c10960) (1) Data frame sent I0220 12:15:40.036489 8 log.go:172] (0xc001048160) (0xc001d6dc20) Stream removed, broadcasting: 5 I0220 12:15:40.036527 8 log.go:172] (0xc001048160) (0xc001c10960) Stream removed, broadcasting: 1 I0220 12:15:40.036552 8 log.go:172] (0xc001048160) Go away received I0220 12:15:40.036866 8 log.go:172] (0xc001048160) (0xc001c10960) Stream removed, broadcasting: 1 I0220 12:15:40.037078 8 log.go:172] (0xc001048160) (0xc0009cee60) Stream removed, broadcasting: 3 I0220 12:15:40.037145 8 log.go:172] (0xc001048160) (0xc001d6dc20) Stream removed, broadcasting: 5 Feb 20 12:15:40.037: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:15:40.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-5svgw" for this suite. 
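The exec transcript above compares /etc/hosts inside containers of an ordinary pod (where the kubelet injects a managed hosts file) with containers of a hostNetwork pod (where it does not). A minimal sketch of the two pod shapes, with names and the container command chosen here for illustration:

```go
// Sketch of the two pod shapes the /etc/hosts spec above compares: a regular
// pod (kubelet manages /etc/hosts) and a hostNetwork pod (it does not).
// Names and the busybox command are assumptions, not the test's exact fixtures.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func busyboxPod(name string, hostNetwork bool) *corev1.Pod {
	return &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			HostNetwork:   hostNetwork, // true: kubelet leaves /etc/hosts alone
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}

func main() {
	for _, p := range []*corev1.Pod{
		busyboxPod("test-pod", false),
		busyboxPod("test-host-network-pod", true),
	} {
		out, _ := json.MarshalIndent(p, "", "  ")
		fmt.Println(string(out))
	}
}
```

A container that mounts its own volume at /etc/hosts, like busybox-3 in the transcript, is likewise left unmanaged by the kubelet.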
Feb 20 12:16:36.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:16:36.239: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-5svgw, resource: bindings, ignored listing per whitelist Feb 20 12:16:36.297: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-5svgw deletion completed in 56.232944379s • [SLOW TEST:84.690 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:16:36.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 20 12:16:36.649: INFO: Waiting up to 5m0s for pod "downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-tplk2" to be "success or failure" Feb 20 12:16:36.666: INFO: Pod "downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.108015ms Feb 20 12:16:38.722: INFO: Pod "downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073256281s Feb 20 12:16:40.772: INFO: Pod "downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12335114s Feb 20 12:16:42.788: INFO: Pod "downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139293212s Feb 20 12:16:44.809: INFO: Pod "downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160336075s Feb 20 12:16:46.832: INFO: Pod "downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.183230869s Feb 20 12:16:48.849: INFO: Pod "downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.199777752s STEP: Saw pod success Feb 20 12:16:48.849: INFO: Pod "downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:16:48.856: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008 container dapi-container: STEP: delete the pod Feb 20 12:16:48.964: INFO: Waiting for pod downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008 to disappear Feb 20 12:16:49.006: INFO: Pod downward-api-d72bfd2e-53da-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:16:49.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tplk2" for this suite. Feb 20 12:16:55.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:16:55.304: INFO: namespace: e2e-tests-downward-api-tplk2, resource: bindings, ignored listing per whitelist Feb 20 12:16:55.363: INFO: namespace e2e-tests-downward-api-tplk2 deletion completed in 6.343407897s • [SLOW TEST:19.066 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:16:55.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 12:16:55.526: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-zqqk7" to be "success or failure" Feb 20 12:16:55.536: INFO: Pod "downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07401ms Feb 20 12:16:57.548: INFO: Pod "downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021488543s Feb 20 12:16:59.560: INFO: Pod "downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033618364s Feb 20 12:17:01.578: INFO: Pod "downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052171651s Feb 20 12:17:03.623: INFO: Pod "downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.096568966s Feb 20 12:17:05.961: INFO: Pod "downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.434420648s STEP: Saw pod success Feb 20 12:17:05.961: INFO: Pod "downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:17:05.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 12:17:06.517: INFO: Waiting for pod downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008 to disappear Feb 20 12:17:06.537: INFO: Pod downwardapi-volume-e26f88a0-53da-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:17:06.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zqqk7" for this suite. Feb 20 12:17:12.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:17:12.792: INFO: namespace: e2e-tests-projected-zqqk7, resource: bindings, ignored listing per whitelist Feb 20 12:17:12.815: INFO: namespace e2e-tests-projected-zqqk7 deletion completed in 6.261651s • [SLOW TEST:17.452 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:17:12.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Feb 20 12:17:12.992: INFO: Waiting up to 5m0s for pod "client-containers-ecd69632-53da-11ea-bcb7-0242ac110008" in namespace "e2e-tests-containers-gm7sq" to be "success or failure" Feb 20 12:17:13.011: INFO: Pod "client-containers-ecd69632-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.343573ms Feb 20 12:17:15.259: INFO: Pod "client-containers-ecd69632-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267560751s Feb 20 12:17:17.296: INFO: Pod "client-containers-ecd69632-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304619956s Feb 20 12:17:19.330: INFO: Pod "client-containers-ecd69632-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.338662698s Feb 20 12:17:21.348: INFO: Pod "client-containers-ecd69632-53da-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356661072s Feb 20 12:17:23.370: INFO: Pod "client-containers-ecd69632-53da-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.378366785s STEP: Saw pod success Feb 20 12:17:23.370: INFO: Pod "client-containers-ecd69632-53da-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:17:23.378: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ecd69632-53da-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 12:17:23.936: INFO: Waiting for pod client-containers-ecd69632-53da-11ea-bcb7-0242ac110008 to disappear Feb 20 12:17:24.272: INFO: Pod client-containers-ecd69632-53da-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:17:24.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-gm7sq" for this suite. Feb 20 12:17:30.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:17:30.564: INFO: namespace: e2e-tests-containers-gm7sq, resource: bindings, ignored listing per whitelist Feb 20 12:17:30.621: INFO: namespace e2e-tests-containers-gm7sq deletion completed in 6.331127924s • [SLOW TEST:17.806 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:17:30.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:17:30.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-2gl4j" for this suite. 
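Two threads from this run come together here: the Docker Containers specs override an image's default ENTRYPOINT/CMD through the container's command and args fields, and the Kubelet spec just above shows that a pod whose command always fails can still be deleted. A hedged sketch of both, with the pod name, namespace and failing command assumed:

```go
// Sketch tying together the container-override specs above and the "always
// fails, should be possible to delete" spec: Command/Args replace the image's
// ENTRYPOINT/CMD, and the pod is deleted regardless of its phase.
// Pod name, namespace and the failing command are assumptions.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/sh"},      // overrides the image ENTRYPOINT
				Args:    []string{"-c", "exit 1"}, // overrides the image CMD; always fails
			}},
		},
	}

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Deleting works even though the container keeps failing.
	if err := cs.CoreV1().Pods("default").Delete(ctx, pod.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```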
Feb 20 12:17:37.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:17:37.173: INFO: namespace: e2e-tests-kubelet-test-2gl4j, resource: bindings, ignored listing per whitelist Feb 20 12:17:37.321: INFO: namespace e2e-tests-kubelet-test-2gl4j deletion completed in 6.308182272s • [SLOW TEST:6.700 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:17:37.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
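A minimal sketch of a DaemonSet comparable to the simple "daemon-set" fixture being created here; the label, image and container name are illustrative assumptions, not the test's exact fixture:

```go
// Minimal sketch of a DaemonSet comparable to the "daemon-set" fixture the
// spec above creates; the label, image and container name are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{Kind: "DaemonSet", APIVersion: "apps/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "nginx:1.14-alpine",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out)) // one daemon pod per schedulable node
}
```

The controller is expected to keep one such pod available per schedulable node, which is what the availability polling below checks.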
Feb 20 12:17:37.579: INFO: Number of nodes with available pods: 0 Feb 20 12:17:37.579: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 12:17:38.600: INFO: Number of nodes with available pods: 0 Feb 20 12:17:38.600: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 12:17:39.775: INFO: Number of nodes with available pods: 0 Feb 20 12:17:39.775: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 12:17:40.623: INFO: Number of nodes with available pods: 0 Feb 20 12:17:40.623: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 12:17:41.606: INFO: Number of nodes with available pods: 0 Feb 20 12:17:41.606: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 12:17:43.219: INFO: Number of nodes with available pods: 0 Feb 20 12:17:43.219: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 12:17:43.721: INFO: Number of nodes with available pods: 0 Feb 20 12:17:43.721: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 12:17:44.626: INFO: Number of nodes with available pods: 0 Feb 20 12:17:44.626: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 12:17:45.670: INFO: Number of nodes with available pods: 0 Feb 20 12:17:45.670: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 12:17:46.649: INFO: Number of nodes with available pods: 0 Feb 20 12:17:46.650: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 20 12:17:47.675: INFO: Number of nodes with available pods: 1 Feb 20 12:17:47.675: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Feb 20 12:17:47.731: INFO: Number of nodes with available pods: 1 Feb 20 12:17:47.731: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mwbmd, will wait for the garbage collector to delete the pods Feb 20 12:17:48.846: INFO: Deleting DaemonSet.extensions daemon-set took: 27.88725ms Feb 20 12:17:49.846: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000354604s Feb 20 12:17:51.093: INFO: Number of nodes with available pods: 0 Feb 20 12:17:51.093: INFO: Number of running nodes: 0, number of available pods: 0 Feb 20 12:17:51.097: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mwbmd/daemonsets","resourceVersion":"22310590"},"items":null} Feb 20 12:17:51.100: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mwbmd/pods","resourceVersion":"22310590"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:17:51.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mwbmd" for this suite. 
Feb 20 12:17:57.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:17:57.300: INFO: namespace: e2e-tests-daemonsets-mwbmd, resource: bindings, ignored listing per whitelist Feb 20 12:17:57.313: INFO: namespace e2e-tests-daemonsets-mwbmd deletion completed in 6.200725678s • [SLOW TEST:19.992 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:17:57.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 12:17:57.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0767e0c6-53db-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-jt9qf" to be "success or failure" Feb 20 12:17:57.593: INFO: Pod "downwardapi-volume-0767e0c6-53db-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.920579ms Feb 20 12:17:59.609: INFO: Pod "downwardapi-volume-0767e0c6-53db-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045179645s Feb 20 12:18:02.240: INFO: Pod "downwardapi-volume-0767e0c6-53db-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.676619882s Feb 20 12:18:04.250: INFO: Pod "downwardapi-volume-0767e0c6-53db-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.686907937s Feb 20 12:18:06.263: INFO: Pod "downwardapi-volume-0767e0c6-53db-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.699377663s STEP: Saw pod success Feb 20 12:18:06.263: INFO: Pod "downwardapi-volume-0767e0c6-53db-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:18:06.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0767e0c6-53db-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 12:18:06.338: INFO: Waiting for pod downwardapi-volume-0767e0c6-53db-11ea-bcb7-0242ac110008 to disappear Feb 20 12:18:06.343: INFO: Pod downwardapi-volume-0767e0c6-53db-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:18:06.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jt9qf" for this suite. 
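Both Downward API volume specs in this run (the default cpu limit earlier and the memory limit just above) surface container resource limits as files through a downward API volume. A sketch of that volume shape, with file names, mount path and resource values assumed:

```go
// Sketch of a downward API volume like the ones exercised above: the
// container's cpu and memory limits are exposed as files in the volume.
// File names, mount path and the resource values are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{
							{Path: "cpu_limit", ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", Resource: "limits.cpu"}},
							{Path: "memory_limit", ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", Resource: "limits.memory"}},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

When a container declares no cpu limit, as in the earlier spec, the downward API falls back to reporting the node's allocatable cpu.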
Feb 20 12:18:13.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:18:13.688: INFO: namespace: e2e-tests-downward-api-jt9qf, resource: bindings, ignored listing per whitelist Feb 20 12:18:13.934: INFO: namespace e2e-tests-downward-api-jt9qf deletion completed in 7.580022129s • [SLOW TEST:16.621 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:18:13.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Feb 20 12:18:14.229: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:18:14.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xbh7f" for this suite. 
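The Proxy server spec above launches kubectl proxy on a random local port (-p 0) and curls /api/ through it. A sketch of the same flow driven from Go via os/exec; the "Starting to serve on <addr>" stdout format and the kubeconfig path are assumptions:

```go
// Sketch of the proxy flow the spec above exercises: start `kubectl proxy`
// on a random port (-p 0), read the bound address from its stdout, then hit
// /api/ through it. The "Starting to serve on <addr>" line format and the
// kubeconfig path are assumptions.
package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// First line is expected to look like: "Starting to serve on 127.0.0.1:41513"
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	addr := strings.TrimSpace(line[strings.LastIndex(line, " ")+1:])

	resp, err := http.Get("http://" + addr + "/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Fprintln(os.Stdout, string(body))
}
```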
Feb 20 12:18:20.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:18:20.687: INFO: namespace: e2e-tests-kubectl-xbh7f, resource: bindings, ignored listing per whitelist Feb 20 12:18:20.696: INFO: namespace e2e-tests-kubectl-xbh7f deletion completed in 6.37721803s • [SLOW TEST:6.761 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:18:20.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:18:33.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vnrt7" for this suite. 
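Judging by the cleanup STEP lines above (secret, configmap, pod), the "EmptyDir wrapper volumes should not conflict" spec mounts a Secret volume and a ConfigMap volume into one pod and checks that the internal emptyDir wrapper kubelet uses for these volume types does not collide. A rough illustration, with every name and the image assumed rather than taken from the run:

apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo                   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                     # assumed image
    command: ["sh", "-c", "ls /etc/secret-vol /etc/cm-vol"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-vol
    - name: cm-vol
      mountPath: /etc/cm-vol
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-demo-secret  # hypothetical, must exist beforehand
  - name: cm-vol
    configMap:
      name: wrapper-demo-cm            # hypothetical, must exist beforehand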
Feb 20 12:18:39.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:18:39.334: INFO: namespace: e2e-tests-emptydir-wrapper-vnrt7, resource: bindings, ignored listing per whitelist Feb 20 12:18:39.547: INFO: namespace e2e-tests-emptydir-wrapper-vnrt7 deletion completed in 6.353951234s • [SLOW TEST:18.851 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:18:39.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 20 12:18:40.884: INFO: Pod name wrapped-volume-race-2124a1c5-53db-11ea-bcb7-0242ac110008: Found 0 pods out of 5 Feb 20 12:18:45.909: INFO: Pod name wrapped-volume-race-2124a1c5-53db-11ea-bcb7-0242ac110008: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2124a1c5-53db-11ea-bcb7-0242ac110008 in namespace e2e-tests-emptydir-wrapper-dcqwp, will wait for the garbage collector to delete the pods Feb 20 12:20:50.176: INFO: Deleting ReplicationController wrapped-volume-race-2124a1c5-53db-11ea-bcb7-0242ac110008 took: 31.429191ms Feb 20 12:20:50.476: INFO: Terminating ReplicationController wrapped-volume-race-2124a1c5-53db-11ea-bcb7-0242ac110008 pods took: 300.503671ms STEP: Creating RC which spawns configmap-volume pods Feb 20 12:21:43.056: INFO: Pod name wrapped-volume-race-8da832aa-53db-11ea-bcb7-0242ac110008: Found 0 pods out of 5 Feb 20 12:21:48.084: INFO: Pod name wrapped-volume-race-8da832aa-53db-11ea-bcb7-0242ac110008: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8da832aa-53db-11ea-bcb7-0242ac110008 in namespace e2e-tests-emptydir-wrapper-dcqwp, will wait for the garbage collector to delete the pods Feb 20 12:24:12.285: INFO: Deleting ReplicationController wrapped-volume-race-8da832aa-53db-11ea-bcb7-0242ac110008 took: 37.672107ms Feb 20 12:24:12.685: INFO: Terminating ReplicationController wrapped-volume-race-8da832aa-53db-11ea-bcb7-0242ac110008 pods took: 400.44636ms STEP: Creating RC which spawns configmap-volume pods Feb 20 12:25:03.870: INFO: Pod name wrapped-volume-race-0568638e-53dc-11ea-bcb7-0242ac110008: Found 0 pods out of 5 Feb 20 12:25:08.908: INFO: Pod name wrapped-volume-race-0568638e-53dc-11ea-bcb7-0242ac110008: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0568638e-53dc-11ea-bcb7-0242ac110008 in namespace 
e2e-tests-emptydir-wrapper-dcqwp, will wait for the garbage collector to delete the pods Feb 20 12:27:13.220: INFO: Deleting ReplicationController wrapped-volume-race-0568638e-53dc-11ea-bcb7-0242ac110008 took: 15.57483ms Feb 20 12:27:13.620: INFO: Terminating ReplicationController wrapped-volume-race-0568638e-53dc-11ea-bcb7-0242ac110008 pods took: 400.460649ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:28:05.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-dcqwp" for this suite. Feb 20 12:28:15.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:28:15.400: INFO: namespace: e2e-tests-emptydir-wrapper-dcqwp, resource: bindings, ignored listing per whitelist Feb 20 12:28:15.434: INFO: namespace e2e-tests-emptydir-wrapper-dcqwp deletion completed in 10.343293071s • [SLOW TEST:575.887 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:28:15.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-77e1ae8a-53dc-11ea-bcb7-0242ac110008 STEP: Creating configMap with name cm-test-opt-upd-77e1af54-53dc-11ea-bcb7-0242ac110008 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-77e1ae8a-53dc-11ea-bcb7-0242ac110008 STEP: Updating configmap cm-test-opt-upd-77e1af54-53dc-11ea-bcb7-0242ac110008 STEP: Creating configMap with name cm-test-opt-create-77e1afe7-53dc-11ea-bcb7-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:28:40.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mrx9l" for this suite. 
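The projected configMap spec above ("optional updates should be reflected in volume") relies on marking configMap sources optional, so a missing configMap does not block the pod and later create/update/delete operations show up in the mounted files after the kubelet's next sync. A minimal sketch with hypothetical names and image:

apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo        # hypothetical name
spec:
  containers:
  - name: watcher
    image: busybox                     # assumed image
    command: ["sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: cm-opt-del             # hypothetical; may be deleted while the pod runs
          optional: true
      - configMap:
          name: cm-opt-create          # hypothetical; may not exist yet when the pod starts
          optional: true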
Feb 20 12:29:06.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:29:06.559: INFO: namespace: e2e-tests-projected-mrx9l, resource: bindings, ignored listing per whitelist Feb 20 12:29:06.707: INFO: namespace e2e-tests-projected-mrx9l deletion completed in 26.289429086s • [SLOW TEST:51.273 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:29:06.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-96604d67-53dc-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 12:29:06.933: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-9vv5w" to be "success or failure" Feb 20 12:29:06.947: INFO: Pod "pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.287049ms Feb 20 12:29:08.982: INFO: Pod "pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04934753s Feb 20 12:29:11.015: INFO: Pod "pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082073462s Feb 20 12:29:13.044: INFO: Pod "pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110946596s Feb 20 12:29:15.056: INFO: Pod "pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122660775s Feb 20 12:29:17.078: INFO: Pod "pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.144991615s STEP: Saw pod success Feb 20 12:29:17.078: INFO: Pod "pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:29:17.087: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 20 12:29:17.213: INFO: Waiting for pod pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008 to disappear Feb 20 12:29:17.297: INFO: Pod pod-projected-configmaps-9662f58b-53dc-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:29:17.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9vv5w" for this suite. Feb 20 12:29:23.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:29:23.556: INFO: namespace: e2e-tests-projected-9vv5w, resource: bindings, ignored listing per whitelist Feb 20 12:29:23.701: INFO: namespace e2e-tests-projected-9vv5w deletion completed in 6.382194114s • [SLOW TEST:16.994 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:29:23.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 20 12:29:24.196: INFO: Waiting up to 5m0s for pod "pod-a08ed11d-53dc-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-xtq2j" to be "success or failure" Feb 20 12:29:24.377: INFO: Pod "pod-a08ed11d-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 180.935267ms Feb 20 12:29:26.511: INFO: Pod "pod-a08ed11d-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314261518s Feb 20 12:29:28.553: INFO: Pod "pod-a08ed11d-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356146182s Feb 20 12:29:30.577: INFO: Pod "pod-a08ed11d-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.381027184s Feb 20 12:29:32.613: INFO: Pod "pod-a08ed11d-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.416973957s Feb 20 12:29:34.651: INFO: Pod "pod-a08ed11d-53dc-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.454298498s STEP: Saw pod success Feb 20 12:29:34.651: INFO: Pod "pod-a08ed11d-53dc-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:29:34.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a08ed11d-53dc-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 12:29:34.772: INFO: Waiting for pod pod-a08ed11d-53dc-11ea-bcb7-0242ac110008 to disappear Feb 20 12:29:35.016: INFO: Pod pod-a08ed11d-53dc-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:29:35.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xtq2j" for this suite. Feb 20 12:29:43.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:29:43.132: INFO: namespace: e2e-tests-emptydir-xtq2j, resource: bindings, ignored listing per whitelist Feb 20 12:29:43.188: INFO: namespace e2e-tests-emptydir-xtq2j deletion completed in 8.16163095s • [SLOW TEST:19.486 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:29:43.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 20 12:29:54.262: INFO: Successfully updated pod "labelsupdateac3713eb-53dc-11ea-bcb7-0242ac110008" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:29:56.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bbmlw" for this suite. 
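The projected downwardAPI spec above mounts the pod's own labels into a file; the "Successfully updated pod" line corresponds to relabeling the pod and waiting for the file to change. A minimal equivalent, with names and image illustrative rather than taken from this run:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo              # hypothetical name
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

$ kubectl label pod labelsupdate-demo key2=value2 --overwrite
# /etc/podinfo/labels picks up the new label after the kubelet's next volume sync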
Feb 20 12:30:20.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:30:20.582: INFO: namespace: e2e-tests-projected-bbmlw, resource: bindings, ignored listing per whitelist Feb 20 12:30:20.609: INFO: namespace e2e-tests-projected-bbmlw deletion completed in 24.241567082s • [SLOW TEST:37.420 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:30:20.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 12:30:20.932: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-6zxmg" to be "success or failure" Feb 20 12:30:20.961: INFO: Pod "downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.018904ms Feb 20 12:30:23.070: INFO: Pod "downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137894633s Feb 20 12:30:25.084: INFO: Pod "downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151848024s Feb 20 12:30:27.336: INFO: Pod "downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403692389s Feb 20 12:30:29.393: INFO: Pod "downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.461325078s Feb 20 12:30:31.465: INFO: Pod "downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.53300323s STEP: Saw pod success Feb 20 12:30:31.465: INFO: Pod "downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:30:31.492: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 12:30:31.773: INFO: Waiting for pod downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008 to disappear Feb 20 12:30:31.834: INFO: Pod downwardapi-volume-c27be3cb-53dc-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:30:31.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6zxmg" for this suite. Feb 20 12:30:37.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:30:38.034: INFO: namespace: e2e-tests-downward-api-6zxmg, resource: bindings, ignored listing per whitelist Feb 20 12:30:38.079: INFO: namespace e2e-tests-downward-api-6zxmg deletion completed in 6.175836129s • [SLOW TEST:17.470 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:30:38.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-ccdb5308-53dc-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 12:30:38.324: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-dsnzr" to be "success or failure" Feb 20 12:30:38.403: INFO: Pod "pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 78.704038ms Feb 20 12:30:40.444: INFO: Pod "pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120113179s Feb 20 12:30:42.502: INFO: Pod "pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17753559s Feb 20 12:30:45.096: INFO: Pod "pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772040272s Feb 20 12:30:47.184: INFO: Pod "pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.860440921s Feb 20 12:30:49.478: INFO: Pod "pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.153521445s STEP: Saw pod success Feb 20 12:30:49.478: INFO: Pod "pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:30:49.499: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 20 12:30:49.752: INFO: Waiting for pod pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008 to disappear Feb 20 12:30:49.762: INFO: Pod pod-projected-secrets-ccdc4cd8-53dc-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:30:49.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dsnzr" for this suite. Feb 20 12:30:55.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:30:55.938: INFO: namespace: e2e-tests-projected-dsnzr, resource: bindings, ignored listing per whitelist Feb 20 12:30:56.073: INFO: namespace e2e-tests-projected-dsnzr deletion completed in 6.243553875s • [SLOW TEST:17.994 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:30:56.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 20 12:30:56.392: INFO: Waiting up to 5m0s for pod "pod-d79f910e-53dc-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-m4mrg" to be "success or failure" Feb 20 12:30:56.402: INFO: Pod "pod-d79f910e-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.871884ms Feb 20 12:30:58.419: INFO: Pod "pod-d79f910e-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027727432s Feb 20 12:31:00.434: INFO: Pod "pod-d79f910e-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041822076s Feb 20 12:31:02.753: INFO: Pod "pod-d79f910e-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360779116s Feb 20 12:31:04.801: INFO: Pod "pod-d79f910e-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.409447918s Feb 20 12:31:07.193: INFO: Pod "pod-d79f910e-53dc-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.801628082s STEP: Saw pod success Feb 20 12:31:07.193: INFO: Pod "pod-d79f910e-53dc-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:31:07.199: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d79f910e-53dc-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 12:31:07.598: INFO: Waiting for pod pod-d79f910e-53dc-11ea-bcb7-0242ac110008 to disappear Feb 20 12:31:07.814: INFO: Pod pod-d79f910e-53dc-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:31:07.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-m4mrg" for this suite. Feb 20 12:31:14.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:31:14.085: INFO: namespace: e2e-tests-emptydir-m4mrg, resource: bindings, ignored listing per whitelist Feb 20 12:31:14.350: INFO: namespace e2e-tests-emptydir-m4mrg deletion completed in 6.511212374s • [SLOW TEST:18.276 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:31:14.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 20 12:31:14.764: INFO: namespace e2e-tests-kubectl-r2kgg Feb 20 12:31:14.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r2kgg' Feb 20 12:31:16.930: INFO: stderr: "" Feb 20 12:31:16.930: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
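The EmptyDir specs above ("should support (root,0777,tmpfs)" and "volume on tmpfs should have the correct mode") boil down to a pod using a memory-backed emptyDir and checking the mount and file permissions from inside the container. emptyDir itself has no mode field; the 0777/0644 variants are checked by writing a file with those permissions into the volume and reading them back. An illustrative pod, with name and image assumed:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "mount | grep /cache; stat -c '%a' /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory                   # tmpfs-backed emptyDir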
Feb 20 12:31:17.952: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:17.952: INFO: Found 0 / 1 Feb 20 12:31:18.948: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:18.949: INFO: Found 0 / 1 Feb 20 12:31:20.058: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:20.058: INFO: Found 0 / 1 Feb 20 12:31:20.943: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:20.943: INFO: Found 0 / 1 Feb 20 12:31:22.295: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:22.295: INFO: Found 0 / 1 Feb 20 12:31:22.975: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:22.975: INFO: Found 0 / 1 Feb 20 12:31:23.949: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:23.949: INFO: Found 0 / 1 Feb 20 12:31:24.982: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:24.982: INFO: Found 0 / 1 Feb 20 12:31:25.962: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:25.962: INFO: Found 0 / 1 Feb 20 12:31:26.951: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:26.951: INFO: Found 1 / 1 Feb 20 12:31:26.951: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 20 12:31:26.957: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:31:26.957: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 20 12:31:26.957: INFO: wait on redis-master startup in e2e-tests-kubectl-r2kgg Feb 20 12:31:26.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ltznm redis-master --namespace=e2e-tests-kubectl-r2kgg' Feb 20 12:31:27.143: INFO: stderr: "" Feb 20 12:31:27.143: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Feb 12:31:25.056 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Feb 12:31:25.057 # Server started, Redis version 3.2.12\n1:M 20 Feb 12:31:25.057 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Feb 12:31:25.057 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Feb 20 12:31:27.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-r2kgg' Feb 20 12:31:27.382: INFO: stderr: "" Feb 20 12:31:27.382: INFO: stdout: "service/rm2 exposed\n" Feb 20 12:31:27.389: INFO: Service rm2 in namespace e2e-tests-kubectl-r2kgg found. 
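The expose step above creates a new Service (rm2) whose selector is copied from the exposed replication controller; the rm3 step that follows applies the same idea to an existing Service. Reproducing and inspecting the first one by hand (the expose command mirrors the log, the get is an assumed verification step):

$ kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
service/rm2 exposed
$ kubectl get svc rm2 -o yaml    # expect selector app=redis and port 1234 -> targetPort 6379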
STEP: exposing service Feb 20 12:31:29.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-r2kgg' Feb 20 12:31:29.761: INFO: stderr: "" Feb 20 12:31:29.761: INFO: stdout: "service/rm3 exposed\n" Feb 20 12:31:29.809: INFO: Service rm3 in namespace e2e-tests-kubectl-r2kgg found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:31:31.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-r2kgg" for this suite. Feb 20 12:31:58.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:31:58.272: INFO: namespace: e2e-tests-kubectl-r2kgg, resource: bindings, ignored listing per whitelist Feb 20 12:31:58.307: INFO: namespace e2e-tests-kubectl-r2kgg deletion completed in 26.455418332s • [SLOW TEST:43.956 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:31:58.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 20 12:31:58.579: INFO: Waiting up to 5m0s for pod "pod-fca076b5-53dc-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-pcn9w" to be "success or failure" Feb 20 12:31:58.607: INFO: Pod "pod-fca076b5-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 27.439005ms Feb 20 12:32:00.646: INFO: Pod "pod-fca076b5-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0666853s Feb 20 12:32:02.694: INFO: Pod "pod-fca076b5-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114944855s Feb 20 12:32:04.796: INFO: Pod "pod-fca076b5-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216842955s Feb 20 12:32:06.821: INFO: Pod "pod-fca076b5-53dc-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241383098s Feb 20 12:32:08.843: INFO: Pod "pod-fca076b5-53dc-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.263797917s STEP: Saw pod success Feb 20 12:32:08.843: INFO: Pod "pod-fca076b5-53dc-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:32:08.863: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fca076b5-53dc-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 12:32:09.116: INFO: Waiting for pod pod-fca076b5-53dc-11ea-bcb7-0242ac110008 to disappear Feb 20 12:32:09.290: INFO: Pod pod-fca076b5-53dc-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:32:09.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-pcn9w" for this suite. Feb 20 12:32:16.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:32:16.326: INFO: namespace: e2e-tests-emptydir-pcn9w, resource: bindings, ignored listing per whitelist Feb 20 12:32:16.397: INFO: namespace e2e-tests-emptydir-pcn9w deletion completed in 7.091962399s • [SLOW TEST:18.090 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:32:16.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Feb 20 12:32:16.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6m4jg' Feb 20 12:32:17.338: INFO: stderr: "" Feb 20 12:32:17.338: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. 
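The log-filtering steps that follow drive kubectl's line, byte and time filters (the suite invokes them through the old "kubectl log" alias of "kubectl logs"). Stand-alone equivalents, with <pod> standing in for the redis-master pod name printed below:

$ kubectl logs <pod> redis-master --tail=1           # last line only
$ kubectl logs <pod> redis-master --limit-bytes=1    # first byte only
$ kubectl logs <pod> redis-master --tail=1 --timestamps
$ kubectl logs <pod> redis-master --since=1s         # usually empty for an idle container
$ kubectl logs <pod> redis-master --since=24h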
Feb 20 12:32:18.363: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:32:18.364: INFO: Found 0 / 1 Feb 20 12:32:19.799: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:32:19.799: INFO: Found 0 / 1 Feb 20 12:32:20.357: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:32:20.357: INFO: Found 0 / 1 Feb 20 12:32:21.348: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:32:21.348: INFO: Found 0 / 1 Feb 20 12:32:23.244: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:32:23.244: INFO: Found 0 / 1 Feb 20 12:32:23.551: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:32:23.551: INFO: Found 0 / 1 Feb 20 12:32:24.352: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:32:24.352: INFO: Found 0 / 1 Feb 20 12:32:25.422: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:32:25.422: INFO: Found 0 / 1 Feb 20 12:32:26.348: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:32:26.348: INFO: Found 1 / 1 Feb 20 12:32:26.349: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 20 12:32:26.356: INFO: Selector matched 1 pods for map[app:redis] Feb 20 12:32:26.356: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Feb 20 12:32:26.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rw8zs redis-master --namespace=e2e-tests-kubectl-6m4jg' Feb 20 12:32:26.656: INFO: stderr: "" Feb 20 12:32:26.656: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Feb 12:32:25.525 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Feb 12:32:25.525 # Server started, Redis version 3.2.12\n1:M 20 Feb 12:32:25.525 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 20 Feb 12:32:25.525 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 20 12:32:26.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rw8zs redis-master --namespace=e2e-tests-kubectl-6m4jg --tail=1' Feb 20 12:32:26.856: INFO: stderr: "" Feb 20 12:32:26.856: INFO: stdout: "1:M 20 Feb 12:32:25.525 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 20 12:32:26.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rw8zs redis-master --namespace=e2e-tests-kubectl-6m4jg --limit-bytes=1' Feb 20 12:32:26.979: INFO: stderr: "" Feb 20 12:32:26.979: INFO: stdout: " " STEP: exposing timestamps Feb 20 12:32:26.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rw8zs redis-master --namespace=e2e-tests-kubectl-6m4jg --tail=1 --timestamps' Feb 20 12:32:27.144: INFO: stderr: "" Feb 20 12:32:27.144: INFO: stdout: "2020-02-20T12:32:25.525702741Z 1:M 20 Feb 12:32:25.525 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 20 12:32:29.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rw8zs redis-master --namespace=e2e-tests-kubectl-6m4jg --since=1s' Feb 20 12:32:29.819: INFO: stderr: "" Feb 20 12:32:29.819: INFO: stdout: "" Feb 20 12:32:29.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rw8zs redis-master --namespace=e2e-tests-kubectl-6m4jg --since=24h' Feb 20 12:32:29.971: INFO: stderr: "" Feb 20 12:32:29.971: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Feb 12:32:25.525 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Feb 12:32:25.525 # Server started, Redis version 3.2.12\n1:M 20 Feb 12:32:25.525 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Feb 12:32:25.525 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Feb 20 12:32:29.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6m4jg' Feb 20 12:32:30.080: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 20 12:32:30.080: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 20 12:32:30.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-6m4jg' Feb 20 12:32:30.234: INFO: stderr: "No resources found.\n" Feb 20 12:32:30.234: INFO: stdout: "" Feb 20 12:32:30.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-6m4jg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 20 12:32:30.361: INFO: stderr: "" Feb 20 12:32:30.361: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:32:30.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6m4jg" for this suite. Feb 20 12:32:52.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:32:52.744: INFO: namespace: e2e-tests-kubectl-6m4jg, resource: bindings, ignored listing per whitelist Feb 20 12:32:52.782: INFO: namespace e2e-tests-kubectl-6m4jg deletion completed in 22.400929975s • [SLOW TEST:36.385 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:32:52.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Feb 20 12:32:53.556: INFO: Waiting up to 5m0s for pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2" in namespace "e2e-tests-svcaccounts-ggrpz" to be "success or failure" Feb 20 12:32:53.570: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.808578ms Feb 20 12:32:55.586: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029600059s Feb 20 12:32:57.607: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050682878s Feb 20 12:33:00.257: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.701096123s Feb 20 12:33:02.280: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724436574s Feb 20 12:33:04.303: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.746627102s Feb 20 12:33:06.319: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.763460092s Feb 20 12:33:08.340: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.784354511s Feb 20 12:33:10.368: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.811937551s STEP: Saw pod success Feb 20 12:33:10.368: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2" satisfied condition "success or failure" Feb 20 12:33:10.378: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2 container token-test: STEP: delete the pod Feb 20 12:33:10.565: INFO: Waiting for pod pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2 to disappear Feb 20 12:33:10.589: INFO: Pod pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-98vl2 no longer exists STEP: Creating a pod to test consume service account root CA Feb 20 12:33:10.608: INFO: Waiting up to 5m0s for pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s" in namespace "e2e-tests-svcaccounts-ggrpz" to be "success or failure" Feb 20 12:33:10.753: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s": Phase="Pending", Reason="", readiness=false. Elapsed: 145.110085ms Feb 20 12:33:12.805: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197111619s Feb 20 12:33:14.816: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20804293s Feb 20 12:33:16.894: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.285535476s Feb 20 12:33:18.916: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.307146942s Feb 20 12:33:20.939: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.33037431s Feb 20 12:33:23.121: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.512474506s Feb 20 12:33:25.684: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s": Phase="Running", Reason="", readiness=false. Elapsed: 15.07608091s Feb 20 12:33:27.696: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.087168902s STEP: Saw pod success Feb 20 12:33:27.696: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s" satisfied condition "success or failure" Feb 20 12:33:27.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s container root-ca-test: STEP: delete the pod Feb 20 12:33:29.012: INFO: Waiting for pod pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s to disappear Feb 20 12:33:29.029: INFO: Pod pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-9wl2s no longer exists STEP: Creating a pod to test consume service account namespace Feb 20 12:33:29.068: INFO: Waiting up to 5m0s for pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v" in namespace "e2e-tests-svcaccounts-ggrpz" to be "success or failure" Feb 20 12:33:29.140: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v": Phase="Pending", Reason="", readiness=false. Elapsed: 71.782597ms Feb 20 12:33:31.166: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098000832s Feb 20 12:33:33.908: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.840226063s Feb 20 12:33:35.929: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.860866335s Feb 20 12:33:37.985: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.917646593s Feb 20 12:33:40.315: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v": Phase="Pending", Reason="", readiness=false. Elapsed: 11.247510877s Feb 20 12:33:42.353: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v": Phase="Pending", Reason="", readiness=false. Elapsed: 13.285221799s Feb 20 12:33:44.407: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.339454293s STEP: Saw pod success Feb 20 12:33:44.407: INFO: Pod "pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v" satisfied condition "success or failure" Feb 20 12:33:44.424: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v container namespace-test: STEP: delete the pod Feb 20 12:33:44.769: INFO: Waiting for pod pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v to disappear Feb 20 12:33:44.788: INFO: Pod pod-service-account-1d74ebbf-53dd-11ea-bcb7-0242ac110008-wf82v no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:33:44.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-ggrpz" for this suite. 
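The ServiceAccounts spec above runs three pods that read, respectively, the token, the cluster root CA and the namespace that the kubelet mounts into pods using the default service account. The same files can be inspected in any such pod; only the pod name below is made up:

$ kubectl exec sa-demo -- ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token
$ kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace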
Feb 20 12:33:52.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:33:53.060: INFO: namespace: e2e-tests-svcaccounts-ggrpz, resource: bindings, ignored listing per whitelist Feb 20 12:33:53.081: INFO: namespace e2e-tests-svcaccounts-ggrpz deletion completed in 8.188320614s • [SLOW TEST:60.299 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:33:53.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-kgxw STEP: Creating a pod to test atomic-volume-subpath Feb 20 12:33:53.297: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kgxw" in namespace "e2e-tests-subpath-f87rl" to be "success or failure" Feb 20 12:33:53.377: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Pending", Reason="", readiness=false. Elapsed: 79.582335ms Feb 20 12:33:55.517: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219304781s Feb 20 12:33:57.530: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232639679s Feb 20 12:33:59.762: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464574437s Feb 20 12:34:02.104: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.806886255s Feb 20 12:34:04.311: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Pending", Reason="", readiness=false. Elapsed: 11.01348053s Feb 20 12:34:06.343: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Pending", Reason="", readiness=false. Elapsed: 13.046033482s Feb 20 12:34:08.372: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Pending", Reason="", readiness=false. Elapsed: 15.074460382s Feb 20 12:34:10.409: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Running", Reason="", readiness=false. Elapsed: 17.11131543s Feb 20 12:34:12.428: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Running", Reason="", readiness=false. Elapsed: 19.130316164s Feb 20 12:34:14.443: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Running", Reason="", readiness=false. Elapsed: 21.145892949s Feb 20 12:34:16.513: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Running", Reason="", readiness=false. Elapsed: 23.215197171s Feb 20 12:34:18.541: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Running", Reason="", readiness=false. 
Elapsed: 25.243268031s Feb 20 12:34:20.568: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Running", Reason="", readiness=false. Elapsed: 27.270781176s Feb 20 12:34:22.587: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Running", Reason="", readiness=false. Elapsed: 29.290073423s Feb 20 12:34:24.603: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Running", Reason="", readiness=false. Elapsed: 31.306102159s Feb 20 12:34:26.732: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Running", Reason="", readiness=false. Elapsed: 33.434450439s Feb 20 12:34:28.746: INFO: Pod "pod-subpath-test-configmap-kgxw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.44882319s STEP: Saw pod success Feb 20 12:34:28.746: INFO: Pod "pod-subpath-test-configmap-kgxw" satisfied condition "success or failure" Feb 20 12:34:28.751: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-kgxw container test-container-subpath-configmap-kgxw: STEP: delete the pod Feb 20 12:34:29.382: INFO: Waiting for pod pod-subpath-test-configmap-kgxw to disappear Feb 20 12:34:29.774: INFO: Pod pod-subpath-test-configmap-kgxw no longer exists STEP: Deleting pod pod-subpath-test-configmap-kgxw Feb 20 12:34:29.774: INFO: Deleting pod "pod-subpath-test-configmap-kgxw" in namespace "e2e-tests-subpath-f87rl" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:34:29.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-f87rl" for this suite. Feb 20 12:34:37.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:34:38.047: INFO: namespace: e2e-tests-subpath-f87rl, resource: bindings, ignored listing per whitelist Feb 20 12:34:38.103: INFO: namespace e2e-tests-subpath-f87rl deletion completed in 8.292940686s • [SLOW TEST:45.021 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:34:38.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 20 12:34:51.411: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released 
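The Atomic writer volumes case earlier in this block mounts a single configMap key into the container through a subPath volumeMount. A rough hand-written equivalent, assuming a hypothetical configMap name and key (the real test generates its own data and container image), is sketched below:

    kubectl create configmap subpath-demo --from-literal=data.txt='hello'   # hypothetical name/key
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-subpath-demo                 # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["cat", "/mnt/data.txt"]
        volumeMounts:
        - name: cm
          mountPath: /mnt/data.txt
          subPath: data.txt                  # mount only this key, as a single file
      volumes:
      - name: cm
        configMap:
          name: subpath-demo
    EOF
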
[AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:34:52.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-qg6vg" for this suite. Feb 20 12:35:19.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:35:19.234: INFO: namespace: e2e-tests-replicaset-qg6vg, resource: bindings, ignored listing per whitelist Feb 20 12:35:19.305: INFO: namespace e2e-tests-replicaset-qg6vg deletion completed in 26.822500354s • [SLOW TEST:41.202 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:35:19.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Feb 20 12:35:19.540: INFO: Waiting up to 5m0s for pod "client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008" in namespace "e2e-tests-containers-gwv4l" to be "success or failure" Feb 20 12:35:19.563: INFO: Pod "client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.886123ms Feb 20 12:35:22.210: INFO: Pod "client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.669195877s Feb 20 12:35:24.256: INFO: Pod "client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.716090187s Feb 20 12:35:26.998: INFO: Pod "client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.45766891s Feb 20 12:35:29.017: INFO: Pod "client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.476267163s Feb 20 12:35:31.030: INFO: Pod "client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.489988823s STEP: Saw pod success Feb 20 12:35:31.030: INFO: Pod "client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:35:31.043: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 12:35:31.264: INFO: Waiting for pod client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008 to disappear Feb 20 12:35:31.278: INFO: Pod client-containers-7478ec1a-53dd-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:35:31.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-gwv4l" for this suite. Feb 20 12:35:37.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:35:37.482: INFO: namespace: e2e-tests-containers-gwv4l, resource: bindings, ignored listing per whitelist Feb 20 12:35:37.522: INFO: namespace e2e-tests-containers-gwv4l deletion completed in 6.236573903s • [SLOW TEST:18.217 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:35:37.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 12:35:37.977: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-w87w8" to be "success or failure" Feb 20 12:35:37.989: INFO: Pod "downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.455137ms Feb 20 12:35:40.014: INFO: Pod "downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03727527s Feb 20 12:35:42.043: INFO: Pod "downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065447139s Feb 20 12:35:44.090: INFO: Pod "downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112776443s Feb 20 12:35:46.100: INFO: Pod "downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.122878103s Feb 20 12:35:48.108: INFO: Pod "downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131412246s STEP: Saw pod success Feb 20 12:35:48.109: INFO: Pod "downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:35:48.112: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 12:35:49.026: INFO: Waiting for pod downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008 to disappear Feb 20 12:35:49.317: INFO: Pod downwardapi-volume-7f6610d6-53dd-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:35:49.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-w87w8" for this suite. Feb 20 12:35:55.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:35:55.655: INFO: namespace: e2e-tests-projected-w87w8, resource: bindings, ignored listing per whitelist Feb 20 12:35:55.668: INFO: namespace e2e-tests-projected-w87w8 deletion completed in 6.332779907s • [SLOW TEST:18.146 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:35:55.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 12:35:55.981: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:35:57.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-qfqc6" for this suite. 
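The CustomResourceDefinition case above only creates and then deletes the definition object itself. Against an apiserver of this vintage (v1.13) a hand-written definition would use apiextensions.k8s.io/v1beta1; the group, kind and plural below are invented for illustration and do not come from the test:

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com          # must be <plural>.<group>
    spec:
      group: example.com
      version: v1
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
    EOF
    kubectl delete crd foos.example.com   # the conformance case exercises both create and delete
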
Feb 20 12:36:03.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:36:03.458: INFO: namespace: e2e-tests-custom-resource-definition-qfqc6, resource: bindings, ignored listing per whitelist Feb 20 12:36:03.588: INFO: namespace e2e-tests-custom-resource-definition-qfqc6 deletion completed in 6.301405145s • [SLOW TEST:7.920 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:36:03.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:36:14.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-fzkkm" for this suite. 
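The Kubelet case above runs a busybox command in a pod and asserts that its output is retrievable through the log endpoint. A minimal stand-in, with an assumed pod name and echoed text rather than the test's own values, could be:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-logs-demo         # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "echo hello from the kubelet log test"]
    EOF
    kubectl logs busybox-logs-demo    # expected to return the echoed line
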
Feb 20 12:36:56.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:36:56.325: INFO: namespace: e2e-tests-kubelet-test-fzkkm, resource: bindings, ignored listing per whitelist Feb 20 12:36:56.369: INFO: namespace e2e-tests-kubelet-test-fzkkm deletion completed in 42.137040448s • [SLOW TEST:52.781 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:36:56.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 20 12:36:56.827: INFO: Waiting up to 5m0s for pod "pod-ae766526-53dd-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-pln6d" to be "success or failure" Feb 20 12:36:56.872: INFO: Pod "pod-ae766526-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 44.473976ms Feb 20 12:36:58.890: INFO: Pod "pod-ae766526-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06260353s Feb 20 12:37:00.916: INFO: Pod "pod-ae766526-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088151837s Feb 20 12:37:02.931: INFO: Pod "pod-ae766526-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103510799s Feb 20 12:37:05.836: INFO: Pod "pod-ae766526-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.008911223s Feb 20 12:37:07.851: INFO: Pod "pod-ae766526-53dd-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.023846488s STEP: Saw pod success Feb 20 12:37:07.851: INFO: Pod "pod-ae766526-53dd-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:37:07.861: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ae766526-53dd-11ea-bcb7-0242ac110008 container test-container: STEP: delete the pod Feb 20 12:37:09.474: INFO: Waiting for pod pod-ae766526-53dd-11ea-bcb7-0242ac110008 to disappear Feb 20 12:37:09.504: INFO: Pod pod-ae766526-53dd-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:37:09.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-pln6d" for this suite. 
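In the EmptyDir case above, "(non-root,0666,tmpfs)" means the pod runs as a non-root UID, the volume is a memory-backed emptyDir, and the test image writes and re-reads a file with mode 0666. The sketch below only illustrates the non-root and tmpfs parts under assumed names; the mode check is performed by the test's own image and is not reproduced here:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo       # illustrative name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000               # the "non-root" part of the test name
      containers:
      - name: test-container
        image: busybox
        command: ["/bin/sh", "-c", "mount | grep ' /data ' ; ls -ld /data"]
        volumeMounts:
        - name: scratch
          mountPath: /data
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory              # tmpfs-backed emptyDir
    EOF
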
Feb 20 12:37:15.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:37:15.895: INFO: namespace: e2e-tests-emptydir-pln6d, resource: bindings, ignored listing per whitelist Feb 20 12:37:15.919: INFO: namespace e2e-tests-emptydir-pln6d deletion completed in 6.399045375s • [SLOW TEST:19.550 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:37:15.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-ba06aef9-53dd-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 20 12:37:16.245: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-jfwd4" to be "success or failure" Feb 20 12:37:16.265: INFO: Pod "pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.696257ms Feb 20 12:37:18.326: INFO: Pod "pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081046446s Feb 20 12:37:20.338: INFO: Pod "pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093337954s Feb 20 12:37:22.353: INFO: Pod "pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108585516s Feb 20 12:37:25.230: INFO: Pod "pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.984978958s Feb 20 12:37:27.818: INFO: Pod "pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.573456613s STEP: Saw pod success Feb 20 12:37:27.818: INFO: Pod "pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:37:27.837: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 20 12:37:28.174: INFO: Waiting for pod pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008 to disappear Feb 20 12:37:28.230: INFO: Pod pod-projected-configmaps-ba07f2b0-53dd-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:37:28.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jfwd4" for this suite. Feb 20 12:37:34.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:37:34.655: INFO: namespace: e2e-tests-projected-jfwd4, resource: bindings, ignored listing per whitelist Feb 20 12:37:34.715: INFO: namespace e2e-tests-projected-jfwd4 deletion completed in 6.471845414s • [SLOW TEST:18.795 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:37:34.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Feb 20 12:37:34.839: INFO: Waiting up to 5m0s for pod "var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008" in namespace "e2e-tests-var-expansion-2x8vq" to be "success or failure" Feb 20 12:37:34.950: INFO: Pod "var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 110.744937ms Feb 20 12:37:37.029: INFO: Pod "var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189540817s Feb 20 12:37:39.084: INFO: Pod "var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245442759s Feb 20 12:37:41.162: INFO: Pod "var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323265259s Feb 20 12:37:43.182: INFO: Pod "var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.343210815s Feb 20 12:37:45.275: INFO: Pod "var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.435793749s STEP: Saw pod success Feb 20 12:37:45.275: INFO: Pod "var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:37:45.329: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008 container dapi-container: STEP: delete the pod Feb 20 12:37:45.619: INFO: Waiting for pod var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008 to disappear Feb 20 12:37:45.635: INFO: Pod var-expansion-c520071f-53dd-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:37:45.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-2x8vq" for this suite. Feb 20 12:37:51.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:37:52.056: INFO: namespace: e2e-tests-var-expansion-2x8vq, resource: bindings, ignored listing per whitelist Feb 20 12:37:52.120: INFO: namespace e2e-tests-var-expansion-2x8vq deletion completed in 6.245474036s • [SLOW TEST:17.405 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:37:52.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 20 12:37:52.325: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 20 12:37:57.354: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 20 12:38:03.382: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 20 12:38:05.393: INFO: Creating deployment "test-rollover-deployment" Feb 20 12:38:05.415: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 20 12:38:07.809: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 20 12:38:07.828: INFO: Ensure that both replica sets have 1 created replica Feb 20 12:38:07.848: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 20 12:38:07.997: INFO: Updating deployment test-rollover-deployment Feb 20 12:38:07.997: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 20 12:38:10.323: INFO: Wait 
for revision update of deployment "test-rollover-deployment" to 2 Feb 20 12:38:10.681: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 20 12:38:10.691: INFO: all replica sets need to contain the pod-template-hash label Feb 20 12:38:10.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799088, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 12:38:12.762: INFO: all replica sets need to contain the pod-template-hash label Feb 20 12:38:12.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799088, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 12:38:15.451: INFO: all replica sets need to contain the pod-template-hash label Feb 20 12:38:15.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799088, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 12:38:16.893: INFO: all replica sets need to contain the pod-template-hash label Feb 20 12:38:16.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799088, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 12:38:18.739: INFO: all replica sets need to contain the pod-template-hash label Feb 20 12:38:18.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799097, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 12:38:20.761: INFO: all replica sets need to contain the pod-template-hash label Feb 20 12:38:20.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799097, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 12:38:22.739: INFO: all replica sets need to contain the pod-template-hash label Feb 20 12:38:22.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799097, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 12:38:24.756: INFO: all replica sets need to contain the pod-template-hash label Feb 20 12:38:24.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799097, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 12:38:26.826: INFO: all replica sets need to contain the pod-template-hash label Feb 20 12:38:26.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799097, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799085, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 20 12:38:28.717: INFO: Feb 20 12:38:28.717: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 20 12:38:28.727: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-kgxzl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kgxzl/deployments/test-rollover-deployment,UID:d7584b27-53dd-11ea-a994-fa163e34d433,ResourceVersion:22313236,Generation:2,CreationTimestamp:2020-02-20 12:38:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-20 12:38:05 +0000 UTC 2020-02-20 12:38:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-20 12:38:28 +0000 UTC 2020-02-20 12:38:05 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 20 12:38:28.732: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-kgxzl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kgxzl/replicasets/test-rollover-deployment-5b8479fdb6,UID:d8e5fadc-53dd-11ea-a994-fa163e34d433,ResourceVersion:22313227,Generation:2,CreationTimestamp:2020-02-20 12:38:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d7584b27-53dd-11ea-a994-fa163e34d433 0xc001947967 0xc001947968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 20 12:38:28.732: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 20 12:38:28.732: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-kgxzl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kgxzl/replicasets/test-rollover-controller,UID:cf899740-53dd-11ea-a994-fa163e34d433,ResourceVersion:22313235,Generation:2,CreationTimestamp:2020-02-20 12:37:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d7584b27-53dd-11ea-a994-fa163e34d433 0xc0019477d7 0xc0019477d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 12:38:28.733: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-kgxzl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kgxzl/replicasets/test-rollover-deployment-58494b7559,UID:d75f6e2e-53dd-11ea-a994-fa163e34d433,ResourceVersion:22313192,Generation:2,CreationTimestamp:2020-02-20 12:38:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d7584b27-53dd-11ea-a994-fa163e34d433 0xc001947897 0xc001947898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 20 12:38:28.741: INFO: Pod "test-rollover-deployment-5b8479fdb6-jpc4r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-jpc4r,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-kgxzl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kgxzl/pods/test-rollover-deployment-5b8479fdb6-jpc4r,UID:d90caeb6-53dd-11ea-a994-fa163e34d433,ResourceVersion:22313211,Generation:0,CreationTimestamp:2020-02-20 12:38:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 d8e5fadc-53dd-11ea-a994-fa163e34d433 0xc000d135a7 0xc000d135a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h4znq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h4znq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-h4znq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d136a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d136c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 12:38:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 12:38:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 12:38:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-02-20 12:38:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-20 12:38:08 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-20 12:38:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f68fda67465c61b1347385bac13bb16551ffcb7605be1a17bb79558657e5864b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:38:28.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-kgxzl" for this suite. Feb 20 12:38:38.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:38:38.941: INFO: namespace: e2e-tests-deployment-kgxzl, resource: bindings, ignored listing per whitelist Feb 20 12:38:38.959: INFO: namespace e2e-tests-deployment-kgxzl deletion completed in 10.208788943s • [SLOW TEST:46.839 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:38:38.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 20 12:38:39.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-nlr9n' Feb 20 12:38:39.325: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 20 12:38:39.325: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Feb 20 12:38:43.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-nlr9n' Feb 20 12:38:43.846: INFO: stderr: "" Feb 20 12:38:43.847: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:38:43.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nlr9n" for this suite. Feb 20 12:38:49.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:38:50.072: INFO: namespace: e2e-tests-kubectl-nlr9n, resource: bindings, ignored listing per whitelist Feb 20 12:38:50.140: INFO: namespace e2e-tests-kubectl-nlr9n deletion completed in 6.27839218s • [SLOW TEST:11.180 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:38:50.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n 
"$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-qq95k.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qq95k.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qq95k.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-qq95k.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qq95k.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qq95k.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 20 12:39:06.689: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.697: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.707: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.714: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.721: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.727: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.734: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qq95k.svc.cluster.local from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.739: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.744: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.748: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.752: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.756: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the 
requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.759: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.763: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.766: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.771: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.774: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qq95k.svc.cluster.local from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.778: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.782: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.786: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008: the server could not find the requested resource (get pods dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008) Feb 20 12:39:06.786: INFO: Lookups using e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qq95k.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qq95k.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 20 12:39:12.004: INFO: DNS probes using e2e-tests-dns-qq95k/dns-test-f22c3b95-53dd-11ea-bcb7-0242ac110008 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:39:12.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-qq95k" for this suite. 
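Note: the wheezy and jessie probe pods above drive the dig loop shown earlier in this test. The same cluster-DNS checks can be sketched in plain Go with the standard resolver; this is only an illustrative stand-in for the shell probes, assuming it runs inside a pod on this cluster. The /results directory is taken from the shell loop, and the go_* marker names are modeled on its wheezy_*/jessie_* markers, not produced by the suite itself.

package main

import (
	"context"
	"fmt"
	"net"
	"os"
	"path/filepath"
	"time"
)

func main() {
	// The same names the wheezy/jessie probe loops resolve with dig.
	names := []string{
		"kubernetes.default",
		"kubernetes.default.svc",
		"kubernetes.default.svc.cluster.local",
	}
	resultsDir := "/results" // directory the test's shell loop writes its OK markers to

	for _, proto := range []string{"udp", "tcp"} { // mirrors dig's +notcp / +tcp switches
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, proto, address) // force the chosen transport
			},
		}
		for _, name := range names {
			ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
			addrs, err := r.LookupHost(ctx, name)
			cancel()
			if err != nil || len(addrs) == 0 {
				fmt.Printf("lookup %s over %s failed: %v\n", name, proto, err)
				continue
			}
			marker := filepath.Join(resultsDir, fmt.Sprintf("go_%s@%s", proto, name))
			_ = os.WriteFile(marker, []byte("OK"), 0644)
		}
	}
}

Forcing the dial network in the custom Resolver is the Go counterpart of switching dig between UDP and TCP, which is exactly the pair of cases the test records per name.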
Feb 20 12:39:20.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:39:20.412: INFO: namespace: e2e-tests-dns-qq95k, resource: bindings, ignored listing per whitelist
Feb 20 12:39:20.485: INFO: namespace e2e-tests-dns-qq95k deletion completed in 8.289734254s

• [SLOW TEST:30.346 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:39:20.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 20 12:39:20.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:39:31.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-sdznd" for this suite.
Feb 20 12:40:25.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:40:25.578: INFO: namespace: e2e-tests-pods-sdznd, resource: bindings, ignored listing per whitelist Feb 20 12:40:25.748: INFO: namespace e2e-tests-pods-sdznd deletion completed in 54.249804579s • [SLOW TEST:65.262 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:40:25.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-2b46c069-53de-11ea-bcb7-0242ac110008 STEP: Creating a pod to test consume secrets Feb 20 12:40:26.245: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-vpr7q" to be "success or failure" Feb 20 12:40:26.266: INFO: Pod "pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.149476ms Feb 20 12:40:28.283: INFO: Pod "pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038200554s Feb 20 12:40:30.306: INFO: Pod "pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06166694s Feb 20 12:40:32.971: INFO: Pod "pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.72608489s Feb 20 12:40:35.061: INFO: Pod "pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.816743324s Feb 20 12:40:37.081: INFO: Pod "pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.835973417s STEP: Saw pod success Feb 20 12:40:37.081: INFO: Pod "pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:40:37.086: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 20 12:40:37.239: INFO: Waiting for pod pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008 to disappear Feb 20 12:40:37.253: INFO: Pod pod-projected-secrets-2b47fbda-53de-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:40:37.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vpr7q" for this suite. Feb 20 12:40:43.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:40:43.459: INFO: namespace: e2e-tests-projected-vpr7q, resource: bindings, ignored listing per whitelist Feb 20 12:40:43.497: INFO: namespace e2e-tests-projected-vpr7q deletion completed in 6.23943184s • [SLOW TEST:17.749 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:40:43.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 20 12:40:56.453: INFO: Successfully updated pod "labelsupdate35baafd4-53de-11ea-bcb7-0242ac110008" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:40:58.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-m7sqf" for this suite. 
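Note on the labels-on-modification check above: the pod it creates mounts a downwardAPI volume that projects metadata.labels into a file, so the kubelet rewrites that file once the test patches the labels. A minimal sketch of that pod shape using the k8s.io/api types already dumped elsewhere in this log; the name, image, command and mount path here are illustrative, not the test's actual values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildLabelsPod sketches a pod whose container watches a downwardAPI file
// backed by metadata.labels; updating the pod's labels updates the file.
func buildLabelsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labels-demo",
			Labels: map[string]string{"key1": "value1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(buildLabelsPod().Name)
}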
Feb 20 12:41:24.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:41:24.885: INFO: namespace: e2e-tests-downward-api-m7sqf, resource: bindings, ignored listing per whitelist
Feb 20 12:41:24.979: INFO: namespace e2e-tests-downward-api-m7sqf deletion completed in 26.302688072s

• [SLOW TEST:41.482 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:41:24.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-nfnph
Feb 20 12:41:37.325: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-nfnph
STEP: checking the pod's current state and verifying that restartCount is present
Feb 20 12:41:37.334: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:45:39.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nfnph" for this suite.
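Note: the liveness-http pod above is expected to keep restartCount at 0 for the whole observation window. For an httpGet probe the kubelet effectively does a periodic GET where any status from 200 up to but not including 400 counts as success, and restarts the container after failureThreshold consecutive failures. A bounded stand-in in plain Go; the URL, period and threshold here are illustrative, not the test pod's actual settings.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce mimics a single HTTP liveness check: a 2xx or 3xx status is a pass.
func probeOnce(url string) bool {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
	url := "http://127.0.0.1:8080/healthz"
	period := 3 * time.Second
	failureThreshold := 3

	failures := 0
	for i := 0; i < 10; i++ { // bounded stand-in for the kubelet's endless loop
		if probeOnce(url) {
			failures = 0
			fmt.Println("healthy")
		} else {
			failures++
			fmt.Println("probe failed, consecutive failures:", failures)
			if failures >= failureThreshold {
				fmt.Println("a real kubelet would restart the container here; this test expects that never to happen")
				failures = 0
			}
		}
		time.Sleep(period)
	}
}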
Feb 20 12:45:45.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:45:45.299: INFO: namespace: e2e-tests-container-probe-nfnph, resource: bindings, ignored listing per whitelist
Feb 20 12:45:45.330: INFO: namespace e2e-tests-container-probe-nfnph deletion completed in 6.274928111s

• [SLOW TEST:260.350 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:45:45.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0220 12:46:27.497642 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 20 12:46:27.497: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:46:27.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-97kfx" for this suite.
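Note: the "delete options say so" part of this garbage-collector test is the Orphan propagation policy on the ReplicationController delete call, which is what obliges the controller to leave the RC's pods behind. A sketch of that call with client-go; it assumes a recent client-go (the context-taking signatures are newer than the v1.13-era framework logging here), and the kubeconfig path, namespace and RC name are placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deleteRCOrphaningPods deletes a ReplicationController with
// PropagationPolicy=Orphan, so the garbage collector must not delete its pods.
func deleteRCOrphaningPods(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return client.CoreV1().ReplicationControllers(namespace).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := deleteRCOrphaningPods(context.Background(), client, "default", "test-rc"); err != nil {
		fmt.Println("delete failed:", err)
	}
}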
Feb 20 12:46:38.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:46:38.636: INFO: namespace: e2e-tests-gc-97kfx, resource: bindings, ignored listing per whitelist Feb 20 12:46:38.725: INFO: namespace e2e-tests-gc-97kfx deletion completed in 11.222583594s • [SLOW TEST:53.396 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:46:38.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Feb 20 12:47:03.443: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-098fef17-53df-11ea-bcb7-0242ac110008", GenerateName:"", Namespace:"e2e-tests-pods-kzpkj", SelfLink:"/api/v1/namespaces/e2e-tests-pods-kzpkj/pods/pod-submit-remove-098fef17-53df-11ea-bcb7-0242ac110008", UID:"099a7bd3-53df-11ea-a994-fa163e34d433", ResourceVersion:"22314223", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717799599, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"146191802"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8ffxh", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00192ac80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8ffxh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001fee018), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001278720), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fee060)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fee080)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001fee088), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001fee08c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799599, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799621, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799621, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717799599, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000b1b960), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000b1b980), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://5488bf26d87807faba937eaa0886143620069dcdfbd796a6fb8e312efd9244c5"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:47:09.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-kzpkj" for this suite. Feb 20 12:47:15.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:47:15.316: INFO: namespace: e2e-tests-pods-kzpkj, resource: bindings, ignored listing per whitelist Feb 20 12:47:15.371: INFO: namespace e2e-tests-pods-kzpkj deletion completed in 6.336078212s • [SLOW TEST:36.646 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:47:15.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 20 12:47:15.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008" in namespace "e2e-tests-downward-api-6lbhv" to be "success or failure" Feb 20 12:47:15.614: INFO: Pod "downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 37.06273ms Feb 20 12:47:17.630: INFO: Pod "downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053691053s Feb 20 12:47:19.651: INFO: Pod "downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.074614784s Feb 20 12:47:21.834: INFO: Pod "downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257005725s Feb 20 12:47:23.992: INFO: Pod "downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.415426457s Feb 20 12:47:26.388: INFO: Pod "downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.811739236s STEP: Saw pod success Feb 20 12:47:26.389: INFO: Pod "downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:47:26.408: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008 container client-container: STEP: delete the pod Feb 20 12:47:26.846: INFO: Waiting for pod downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008 to disappear Feb 20 12:47:26.877: INFO: Pod downwardapi-volume-1f446bb2-53df-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:47:26.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6lbhv" for this suite. Feb 20 12:47:33.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:47:33.052: INFO: namespace: e2e-tests-downward-api-6lbhv, resource: bindings, ignored listing per whitelist Feb 20 12:47:33.150: INFO: namespace e2e-tests-downward-api-6lbhv deletion completed in 6.258158786s • [SLOW TEST:17.778 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:47:33.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-29f97a0d-53df-11ea-bcb7-0242ac110008 STEP: Creating secret with name secret-projected-all-test-volume-29f979e1-53df-11ea-bcb7-0242ac110008 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 20 12:47:33.554: INFO: Waiting up to 5m0s for pod "projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-pb8nq" to be "success or failure" Feb 20 12:47:33.560: INFO: Pod "projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.664824ms Feb 20 12:47:35.684: INFO: Pod "projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129245276s Feb 20 12:47:37.693: INFO: Pod "projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138867747s Feb 20 12:47:39.970: INFO: Pod "projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415956594s Feb 20 12:47:42.079: INFO: Pod "projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.524511613s Feb 20 12:47:44.089: INFO: Pod "projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.534669021s STEP: Saw pod success Feb 20 12:47:44.089: INFO: Pod "projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008" satisfied condition "success or failure" Feb 20 12:47:44.093: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008 container projected-all-volume-test: STEP: delete the pod Feb 20 12:47:45.038: INFO: Waiting for pod projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008 to disappear Feb 20 12:47:45.103: INFO: Pod projected-volume-29f9796e-53df-11ea-bcb7-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 20 12:47:45.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pb8nq" for this suite. Feb 20 12:47:51.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 20 12:47:51.379: INFO: namespace: e2e-tests-projected-pb8nq, resource: bindings, ignored listing per whitelist Feb 20 12:47:51.552: INFO: namespace e2e-tests-projected-pb8nq deletion completed in 6.429542441s • [SLOW TEST:18.402 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 20 12:47:51.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 20 12:47:52.496: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 20 12:47:57.516: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:47:59.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-9m98k" for this suite.
Feb 20 12:48:12.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:48:12.583: INFO: namespace: e2e-tests-replication-controller-9m98k, resource: bindings, ignored listing per whitelist
Feb 20 12:48:12.665: INFO: namespace e2e-tests-replication-controller-9m98k deletion completed in 12.707873638s

• [SLOW TEST:21.112 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:48:12.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 20 12:48:13.232: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 19.842253ms)
Feb 20 12:48:13.239: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.082115ms)
Feb 20 12:48:13.244: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.330812ms)
Feb 20 12:48:13.250: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.318449ms)
Feb 20 12:48:13.258: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.32607ms)
Feb 20 12:48:13.265: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.848125ms)
Feb 20 12:48:13.270: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.518827ms)
Feb 20 12:48:13.274: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.041055ms)
Feb 20 12:48:13.278: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.80126ms)
Feb 20 12:48:13.284: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.140476ms)
Feb 20 12:48:13.345: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 61.713048ms)
Feb 20 12:48:13.357: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.924253ms)
Feb 20 12:48:13.367: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.479679ms)
Feb 20 12:48:13.373: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.715437ms)
Feb 20 12:48:13.378: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.14022ms)
Feb 20 12:48:13.383: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.756619ms)
Feb 20 12:48:13.387: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.43197ms)
Feb 20 12:48:13.391: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.06981ms)
Feb 20 12:48:13.395: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.210185ms)
Feb 20 12:48:13.399: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.92508ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:48:13.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-hvtpg" for this suite.
Feb 20 12:48:19.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:48:19.549: INFO: namespace: e2e-tests-proxy-hvtpg, resource: bindings, ignored listing per whitelist
Feb 20 12:48:19.597: INFO: namespace e2e-tests-proxy-hvtpg deletion completed in 6.192394786s

• [SLOW TEST:6.931 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
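Note: each of the twenty (0)..(19) entries above is a GET against the node proxy subresource with the kubelet port spelled out in the node name. Roughly the same request can be issued through client-go's REST client; this is a sketch assuming a recent client-go, with the node name and port copied from the log and the kubeconfig path taken from the suite's own flag.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// GET /api/v1/nodes/<node>:<port>/proxy/logs/ , the same path the test hits.
	body, err := client.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("hunter-server-hu5at5svl7ps:10250"). // node name with explicit kubelet port
		SubResource("proxy").
		Suffix("logs/").
		Do(context.Background()).
		Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}

From the command line, kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/" exercises the same endpoint.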
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:48:19.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 20 12:48:19.961: INFO: Waiting up to 5m0s for pod "pod-4583331c-53df-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-gc5hz" to be "success or failure"
Feb 20 12:48:19.972: INFO: Pod "pod-4583331c-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.273606ms
Feb 20 12:48:22.209: INFO: Pod "pod-4583331c-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248065445s
Feb 20 12:48:24.219: INFO: Pod "pod-4583331c-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258499902s
Feb 20 12:48:26.333: INFO: Pod "pod-4583331c-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.371887758s
Feb 20 12:48:28.347: INFO: Pod "pod-4583331c-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386121476s
Feb 20 12:48:30.370: INFO: Pod "pod-4583331c-53df-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.40960435s
STEP: Saw pod success
Feb 20 12:48:30.371: INFO: Pod "pod-4583331c-53df-11ea-bcb7-0242ac110008" satisfied condition "success or failure"
Feb 20 12:48:30.377: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4583331c-53df-11ea-bcb7-0242ac110008 container test-container: 
STEP: delete the pod
Feb 20 12:48:30.691: INFO: Waiting for pod pod-4583331c-53df-11ea-bcb7-0242ac110008 to disappear
Feb 20 12:48:30.710: INFO: Pod pod-4583331c-53df-11ea-bcb7-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:48:30.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gc5hz" for this suite.
Feb 20 12:48:37.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:48:37.959: INFO: namespace: e2e-tests-emptydir-gc5hz, resource: bindings, ignored listing per whitelist
Feb 20 12:48:38.117: INFO: namespace e2e-tests-emptydir-gc5hz deletion completed in 7.381502998s

• [SLOW TEST:18.519 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
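Note: the (non-root,0666,default) case above comes down to an emptyDir volume on the default medium, a pod-level non-root user, and a container that writes a file with 0666 permissions and reports its mode. A rough sketch of that pod shape; the image, user ID, command and paths are illustrative rather than the suite's actual manifest.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildEmptyDirPod sketches a non-root pod writing a 0666 file into an
// emptyDir volume backed by the default medium (node disk, not tmpfs).
func buildEmptyDirPod() *corev1.Pod {
	nonRootUser := int64(1001)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &nonRootUser,
			},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(buildEmptyDirPod().Name)
}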
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:48:38.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-509b377f-53df-11ea-bcb7-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 20 12:48:38.378: INFO: Waiting up to 5m0s for pod "pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008" in namespace "e2e-tests-secrets-4zc4f" to be "success or failure"
Feb 20 12:48:38.390: INFO: Pod "pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.391005ms
Feb 20 12:48:40.407: INFO: Pod "pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029724455s
Feb 20 12:48:42.430: INFO: Pod "pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052293325s
Feb 20 12:48:44.450: INFO: Pod "pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072050703s
Feb 20 12:48:46.476: INFO: Pod "pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097980453s
Feb 20 12:48:48.499: INFO: Pod "pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121661681s
STEP: Saw pod success
Feb 20 12:48:48.499: INFO: Pod "pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008" satisfied condition "success or failure"
Feb 20 12:48:48.511: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 20 12:48:48.619: INFO: Waiting for pod pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008 to disappear
Feb 20 12:48:48.675: INFO: Pod pod-secrets-509c577a-53df-11ea-bcb7-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:48:48.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4zc4f" for this suite.
Feb 20 12:48:56.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:48:56.862: INFO: namespace: e2e-tests-secrets-4zc4f, resource: bindings, ignored listing per whitelist
Feb 20 12:48:56.890: INFO: namespace e2e-tests-secrets-4zc4f deletion completed in 8.199736681s

• [SLOW TEST:18.773 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
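Note: inside the secret-volume-test container above, the check amounts to reading the files the kubelet materialised from the Secret and reporting their mode and content. A stdlib Go equivalent of what that container does, assuming the volume is mounted at /etc/secret-volume (the mount path is illustrative).

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/secret-volume"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		path := filepath.Join(dir, e.Name())
		info, err := os.Stat(path) // os.Stat follows the kubelet's ..data symlinks
		if err != nil {
			fmt.Println("stat failed:", err)
			continue
		}
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read failed:", err)
			continue
		}
		// Each key of the Secret becomes one file; the content is the decoded value.
		fmt.Printf("%s mode=%v content=%q\n", e.Name(), info.Mode().Perm(), data)
	}
}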
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:48:56.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-5bc53c21-53df-11ea-bcb7-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 20 12:48:57.098: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-sw2wm" to be "success or failure"
Feb 20 12:48:57.105: INFO: Pod "pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.287184ms
Feb 20 12:48:59.123: INFO: Pod "pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025240017s
Feb 20 12:49:01.154: INFO: Pod "pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056364005s
Feb 20 12:49:04.429: INFO: Pod "pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.331622873s
Feb 20 12:49:06.450: INFO: Pod "pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.352221785s
Feb 20 12:49:08.538: INFO: Pod "pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.440220833s
STEP: Saw pod success
Feb 20 12:49:08.538: INFO: Pod "pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008" satisfied condition "success or failure"
Feb 20 12:49:08.567: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 20 12:49:08.904: INFO: Waiting for pod pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008 to disappear
Feb 20 12:49:08.931: INFO: Pod pod-projected-configmaps-5bc88909-53df-11ea-bcb7-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:49:08.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sw2wm" for this suite.
Feb 20 12:49:15.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:49:15.404: INFO: namespace: e2e-tests-projected-sw2wm, resource: bindings, ignored listing per whitelist
Feb 20 12:49:15.440: INFO: namespace e2e-tests-projected-sw2wm deletion completed in 6.498527051s

• [SLOW TEST:18.550 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
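Note: this projected-configMap case and the earlier "Projected combined" case both rest on a single projected volume whose sources list merges ConfigMap and Secret projections into one mount. A sketch of that volume source using the k8s.io/api types; the object names and key paths below are placeholders, not the generated names in the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedVolume sketches one volume that projects a ConfigMap and a Secret
// into the same mount point, which is what the projected-volume tests exercise.
func projectedVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							Items:                []corev1.KeyToPath{{Key: "data-1", Path: "configmap-data/data-1"}},
						},
					},
					{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-volume"},
							Items:                []corev1.KeyToPath{{Key: "data-1", Path: "secret-data/data-1"}},
						},
					},
				},
			},
		},
	}
}

func main() {
	fmt.Println(projectedVolume().Name)
}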
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:49:15.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 20 12:49:15.594: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.682355ms)
Feb 20 12:49:15.658: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 63.941326ms)
Feb 20 12:49:15.667: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.484452ms)
Feb 20 12:49:15.679: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.886384ms)
Feb 20 12:49:15.686: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.791388ms)
Feb 20 12:49:15.701: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.968598ms)
Feb 20 12:49:15.710: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.335049ms)
Feb 20 12:49:15.716: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.257959ms)
Feb 20 12:49:15.722: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.996396ms)
Feb 20 12:49:15.730: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.650411ms)
Feb 20 12:49:15.736: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.986069ms)
Feb 20 12:49:15.742: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.729248ms)
Feb 20 12:49:15.749: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.089064ms)
Feb 20 12:49:15.756: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.007618ms)
Feb 20 12:49:15.765: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.215409ms)
Feb 20 12:49:15.776: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.881754ms)
Feb 20 12:49:15.785: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.989926ms)
Feb 20 12:49:15.791: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.479043ms)
Feb 20 12:49:15.799: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.588693ms)
Feb 20 12:49:15.806: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.627304ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:49:15.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-7gxbb" for this suite.
Feb 20 12:49:21.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:49:22.034: INFO: namespace: e2e-tests-proxy-7gxbb, resource: bindings, ignored listing per whitelist
Feb 20 12:49:22.042: INFO: namespace e2e-tests-proxy-7gxbb deletion completed in 6.230740155s

• [SLOW TEST:6.602 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
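Each numbered iteration above is one GET against the node's proxy subresource, returning the kubelet's log-directory listing along with the request latency. A sketch of issuing the same request with client-go, assuming a client-go release contemporary with the v1.13 server here (newer releases add a context argument to DoRaw); the kubeconfig path and node name are inputs rather than hard-coded values:

package e2esketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeProxyLogs fetches /api/v1/nodes/<node>/proxy/logs through the
// apiserver, the URL each numbered iteration above times.
func nodeProxyLogs(kubeconfig, nodeName string) (string, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return "", err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return "", err
	}
	// GET nodes/<name>/proxy/logs via the core/v1 REST client.
	data, err := cs.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name(nodeName).
		SubResource("proxy").
		Suffix("logs").
		DoRaw()
	if err != nil {
		return "", err
	}
	return string(data), nil
}
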
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:49:22.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 20 12:49:22.256: INFO: Creating ReplicaSet my-hostname-basic-6ac88fdc-53df-11ea-bcb7-0242ac110008
Feb 20 12:49:22.283: INFO: Pod name my-hostname-basic-6ac88fdc-53df-11ea-bcb7-0242ac110008: Found 0 pods out of 1
Feb 20 12:49:27.742: INFO: Pod name my-hostname-basic-6ac88fdc-53df-11ea-bcb7-0242ac110008: Found 1 pods out of 1
Feb 20 12:49:27.742: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6ac88fdc-53df-11ea-bcb7-0242ac110008" is running
Feb 20 12:49:31.767: INFO: Pod "my-hostname-basic-6ac88fdc-53df-11ea-bcb7-0242ac110008-jm69v" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:49:22 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:49:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6ac88fdc-53df-11ea-bcb7-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:49:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6ac88fdc-53df-11ea-bcb7-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-20 12:49:22 +0000 UTC Reason: Message:}])
Feb 20 12:49:31.767: INFO: Trying to dial the pod
Feb 20 12:49:36.813: INFO: Controller my-hostname-basic-6ac88fdc-53df-11ea-bcb7-0242ac110008: Got expected result from replica 1 [my-hostname-basic-6ac88fdc-53df-11ea-bcb7-0242ac110008-jm69v]: "my-hostname-basic-6ac88fdc-53df-11ea-bcb7-0242ac110008-jm69v", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:49:36.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-f7fnt" for this suite.
Feb 20 12:49:44.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:49:44.944: INFO: namespace: e2e-tests-replicaset-f7fnt, resource: bindings, ignored listing per whitelist
Feb 20 12:49:45.046: INFO: namespace e2e-tests-replicaset-f7fnt deletion completed in 8.22231967s

• [SLOW TEST:23.003 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
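The ReplicaSet test above creates one replica of a hostname-serving image and then dials the pod until it answers with its own pod name. A sketch of the object being created, with apps/v1 types; the labels, port, and image are illustrative (the suite uses its own public serve-hostname image and generated names):

package e2esketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostnameReplicaSet returns a one-replica ReplicaSet whose pods serve
// their hostname over HTTP, so each replica can be dialed and compared
// against its pod name as in the log above.
func hostnameReplicaSet(name, image string) *appsv1.ReplicaSet {
	replicas := int32(1)
	labels := map[string]string{"name": name}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: image, // e.g. a public serve-hostname image
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}}, // illustrative port
					}},
				},
			},
		},
	}
}
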
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:49:45.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb 20 12:49:45.259: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix643089935/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:49:45.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t57xn" for this suite.
Feb 20 12:49:51.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:49:51.614: INFO: namespace: e2e-tests-kubectl-t57xn, resource: bindings, ignored listing per whitelist
Feb 20 12:49:51.660: INFO: namespace e2e-tests-kubectl-t57xn deletion completed in 6.263879249s

• [SLOW TEST:6.613 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
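The kubectl test above starts `kubectl proxy --unix-socket=<path>` and then retrieves /api/ through that socket. A sketch of the client side using only Go's standard library: an http.Client whose transport dials the unix socket instead of TCP (the socket path is whatever the proxy was started with; the host in the URL is ignored by the socket dialer):

package e2esketch

import (
	"context"
	"io/ioutil"
	"net"
	"net/http"
)

// getViaUnixSocketProxy issues GET /api/ against a kubectl proxy that is
// listening on a unix socket, mirroring the "retrieving proxy /api/ output"
// step above.
func getViaUnixSocketProxy(socketPath string) (string, error) {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host:port from the URL and dial the socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socketPath)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}
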
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:49:51.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 20 12:49:51.879: INFO: Waiting up to 5m0s for pod "pod-7c6e859f-53df-11ea-bcb7-0242ac110008" in namespace "e2e-tests-emptydir-hcd5h" to be "success or failure"
Feb 20 12:49:51.890: INFO: Pod "pod-7c6e859f-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.55143ms
Feb 20 12:49:54.166: INFO: Pod "pod-7c6e859f-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286471905s
Feb 20 12:49:56.185: INFO: Pod "pod-7c6e859f-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306272585s
Feb 20 12:49:58.203: INFO: Pod "pod-7c6e859f-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323747349s
Feb 20 12:50:00.400: INFO: Pod "pod-7c6e859f-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52116701s
Feb 20 12:50:02.503: INFO: Pod "pod-7c6e859f-53df-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.623606955s
STEP: Saw pod success
Feb 20 12:50:02.503: INFO: Pod "pod-7c6e859f-53df-11ea-bcb7-0242ac110008" satisfied condition "success or failure"
Feb 20 12:50:02.558: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7c6e859f-53df-11ea-bcb7-0242ac110008 container test-container: 
STEP: delete the pod
Feb 20 12:50:02.696: INFO: Waiting for pod pod-7c6e859f-53df-11ea-bcb7-0242ac110008 to disappear
Feb 20 12:50:02.718: INFO: Pod pod-7c6e859f-53df-11ea-bcb7-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:50:02.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hcd5h" for this suite.
Feb 20 12:50:08.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:50:08.927: INFO: namespace: e2e-tests-emptydir-hcd5h, resource: bindings, ignored listing per whitelist
Feb 20 12:50:09.009: INFO: namespace e2e-tests-emptydir-hcd5h deletion completed in 6.2765999s

• [SLOW TEST:17.349 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
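The EmptyDir test above mounts a volume on the default (node-disk) medium and checks the mount's mode from inside the pod. A sketch of a pod in that shape; the busybox command standing in for the suite's mount-test image is only illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirDefaultMediumPod mounts an emptyDir volume backed by the default
// medium (node storage rather than tmpfs), inspects its mode, and exits so
// the pod phase can be polled for Succeeded as in the log above.
func emptyDirDefaultMediumPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder for the suite's mount-test image
				Command: []string{"/bin/sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
}
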
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:50:09.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-86d752c2-53df-11ea-bcb7-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 20 12:50:09.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008" in namespace "e2e-tests-configmap-mdpjs" to be "success or failure"
Feb 20 12:50:09.368: INFO: Pod "pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.554152ms
Feb 20 12:50:11.446: INFO: Pod "pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097932455s
Feb 20 12:50:13.471: INFO: Pod "pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122649583s
Feb 20 12:50:15.730: INFO: Pod "pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38159524s
Feb 20 12:50:17.744: INFO: Pod "pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.395638021s
Feb 20 12:50:19.758: INFO: Pod "pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.409744053s
STEP: Saw pod success
Feb 20 12:50:19.758: INFO: Pod "pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008" satisfied condition "success or failure"
Feb 20 12:50:19.765: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 20 12:50:21.004: INFO: Waiting for pod pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008 to disappear
Feb 20 12:50:21.063: INFO: Pod pod-configmaps-86d87d7b-53df-11ea-bcb7-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:50:21.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mdpjs" for this suite.
Feb 20 12:50:27.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:50:27.237: INFO: namespace: e2e-tests-configmap-mdpjs, resource: bindings, ignored listing per whitelist
Feb 20 12:50:27.313: INFO: namespace e2e-tests-configmap-mdpjs deletion completed in 6.239351341s

• [SLOW TEST:18.304 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
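Here a single ConfigMap is consumed twice in the same pod through two separate volumes. A sketch of that layout; the mount paths, image, and command are placeholders:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapTwoVolumesPod mounts the same ConfigMap at two different paths
// via two volumes, matching the "multiple volumes in the same pod" case.
func configMapTwoVolumesPod(podName, configMapName string) *corev1.Pod {
	cmSource := corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
		},
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: podName},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // placeholder image
				Command: []string{"/bin/sh", "-c", "cat /etc/configmap-volume-1/* /etc/configmap-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1", ReadOnly: true},
					{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: cmSource},
				{Name: "configmap-volume-2", VolumeSource: cmSource},
			},
		},
	}
}
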
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:50:27.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-27zps
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 20 12:50:27.736: INFO: Found 0 stateful pods, waiting for 3
Feb 20 12:50:37.749: INFO: Found 2 stateful pods, waiting for 3
Feb 20 12:50:47.771: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 12:50:47.771: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 12:50:47.771: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 20 12:50:57.814: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 12:50:57.814: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 12:50:57.814: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Feb 20 12:51:07.757: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 12:51:07.757: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 12:51:07.757: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 12:51:07.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-27zps ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 12:51:08.440: INFO: stderr: "I0220 12:51:07.983739    3630 log.go:172] (0xc00013a160) (0xc000596280) Create stream\nI0220 12:51:07.983909    3630 log.go:172] (0xc00013a160) (0xc000596280) Stream added, broadcasting: 1\nI0220 12:51:07.990273    3630 log.go:172] (0xc00013a160) Reply frame received for 1\nI0220 12:51:07.990310    3630 log.go:172] (0xc00013a160) (0xc0004faaa0) Create stream\nI0220 12:51:07.990322    3630 log.go:172] (0xc00013a160) (0xc0004faaa0) Stream added, broadcasting: 3\nI0220 12:51:07.991476    3630 log.go:172] (0xc00013a160) Reply frame received for 3\nI0220 12:51:07.991505    3630 log.go:172] (0xc00013a160) (0xc000222000) Create stream\nI0220 12:51:07.991515    3630 log.go:172] (0xc00013a160) (0xc000222000) Stream added, broadcasting: 5\nI0220 12:51:07.995972    3630 log.go:172] (0xc00013a160) Reply frame received for 5\nI0220 12:51:08.273088    3630 log.go:172] (0xc00013a160) Data frame received for 3\nI0220 12:51:08.273161    3630 log.go:172] (0xc0004faaa0) (3) Data frame handling\nI0220 12:51:08.273175    3630 log.go:172] (0xc0004faaa0) (3) Data frame sent\nI0220 12:51:08.429828    3630 log.go:172] (0xc00013a160) (0xc0004faaa0) Stream removed, broadcasting: 3\nI0220 12:51:08.430164    3630 log.go:172] (0xc00013a160) Data frame received for 1\nI0220 12:51:08.430185    3630 log.go:172] (0xc000596280) (1) Data frame handling\nI0220 12:51:08.430203    3630 log.go:172] (0xc000596280) (1) Data frame sent\nI0220 12:51:08.430219    3630 log.go:172] (0xc00013a160) (0xc000596280) Stream removed, broadcasting: 1\nI0220 12:51:08.430749    3630 log.go:172] (0xc00013a160) (0xc000222000) Stream removed, broadcasting: 5\nI0220 12:51:08.430789    3630 log.go:172] (0xc00013a160) (0xc000596280) Stream removed, broadcasting: 1\nI0220 12:51:08.430807    3630 log.go:172] (0xc00013a160) (0xc0004faaa0) Stream removed, broadcasting: 3\nI0220 12:51:08.430867    3630 log.go:172] (0xc00013a160) (0xc000222000) Stream removed, broadcasting: 5\nI0220 12:51:08.431017    3630 log.go:172] (0xc00013a160) Go away received\n"
Feb 20 12:51:08.440: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 12:51:08.440: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 20 12:51:18.695: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 20 12:51:28.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-27zps ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 12:51:29.300: INFO: stderr: "I0220 12:51:29.009232    3652 log.go:172] (0xc0007042c0) (0xc0007246e0) Create stream\nI0220 12:51:29.009450    3652 log.go:172] (0xc0007042c0) (0xc0007246e0) Stream added, broadcasting: 1\nI0220 12:51:29.017229    3652 log.go:172] (0xc0007042c0) Reply frame received for 1\nI0220 12:51:29.017273    3652 log.go:172] (0xc0007042c0) (0xc0001f8460) Create stream\nI0220 12:51:29.017284    3652 log.go:172] (0xc0007042c0) (0xc0001f8460) Stream added, broadcasting: 3\nI0220 12:51:29.019129    3652 log.go:172] (0xc0007042c0) Reply frame received for 3\nI0220 12:51:29.019172    3652 log.go:172] (0xc0007042c0) (0xc000724780) Create stream\nI0220 12:51:29.019197    3652 log.go:172] (0xc0007042c0) (0xc000724780) Stream added, broadcasting: 5\nI0220 12:51:29.020548    3652 log.go:172] (0xc0007042c0) Reply frame received for 5\nI0220 12:51:29.190726    3652 log.go:172] (0xc0007042c0) Data frame received for 3\nI0220 12:51:29.190783    3652 log.go:172] (0xc0001f8460) (3) Data frame handling\nI0220 12:51:29.190801    3652 log.go:172] (0xc0001f8460) (3) Data frame sent\nI0220 12:51:29.293675    3652 log.go:172] (0xc0007042c0) (0xc0001f8460) Stream removed, broadcasting: 3\nI0220 12:51:29.293853    3652 log.go:172] (0xc0007042c0) Data frame received for 1\nI0220 12:51:29.293883    3652 log.go:172] (0xc0007042c0) (0xc000724780) Stream removed, broadcasting: 5\nI0220 12:51:29.293910    3652 log.go:172] (0xc0007246e0) (1) Data frame handling\nI0220 12:51:29.293931    3652 log.go:172] (0xc0007246e0) (1) Data frame sent\nI0220 12:51:29.293948    3652 log.go:172] (0xc0007042c0) (0xc0007246e0) Stream removed, broadcasting: 1\nI0220 12:51:29.293962    3652 log.go:172] (0xc0007042c0) Go away received\nI0220 12:51:29.294225    3652 log.go:172] (0xc0007042c0) (0xc0007246e0) Stream removed, broadcasting: 1\nI0220 12:51:29.294243    3652 log.go:172] (0xc0007042c0) (0xc0001f8460) Stream removed, broadcasting: 3\nI0220 12:51:29.294247    3652 log.go:172] (0xc0007042c0) (0xc000724780) Stream removed, broadcasting: 5\n"
Feb 20 12:51:29.300: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 12:51:29.300: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 12:51:40.673: INFO: Waiting for StatefulSet e2e-tests-statefulset-27zps/ss2 to complete update
Feb 20 12:51:40.673: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 20 12:51:40.673: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 20 12:51:50.699: INFO: Waiting for StatefulSet e2e-tests-statefulset-27zps/ss2 to complete update
Feb 20 12:51:50.699: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 20 12:51:50.699: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 20 12:52:00.709: INFO: Waiting for StatefulSet e2e-tests-statefulset-27zps/ss2 to complete update
Feb 20 12:52:00.709: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 20 12:52:10.713: INFO: Waiting for StatefulSet e2e-tests-statefulset-27zps/ss2 to complete update
Feb 20 12:52:10.713: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 20 12:52:20.709: INFO: Waiting for StatefulSet e2e-tests-statefulset-27zps/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 20 12:52:30.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-27zps ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 12:52:31.434: INFO: stderr: "I0220 12:52:30.993420    3673 log.go:172] (0xc00014c6e0) (0xc0006e4640) Create stream\nI0220 12:52:30.993560    3673 log.go:172] (0xc00014c6e0) (0xc0006e4640) Stream added, broadcasting: 1\nI0220 12:52:31.002049    3673 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0220 12:52:31.002074    3673 log.go:172] (0xc00014c6e0) (0xc00059ed20) Create stream\nI0220 12:52:31.002080    3673 log.go:172] (0xc00014c6e0) (0xc00059ed20) Stream added, broadcasting: 3\nI0220 12:52:31.003152    3673 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0220 12:52:31.003169    3673 log.go:172] (0xc00014c6e0) (0xc000370000) Create stream\nI0220 12:52:31.003175    3673 log.go:172] (0xc00014c6e0) (0xc000370000) Stream added, broadcasting: 5\nI0220 12:52:31.004257    3673 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0220 12:52:31.253665    3673 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0220 12:52:31.253706    3673 log.go:172] (0xc00059ed20) (3) Data frame handling\nI0220 12:52:31.253719    3673 log.go:172] (0xc00059ed20) (3) Data frame sent\nI0220 12:52:31.426894    3673 log.go:172] (0xc00014c6e0) (0xc00059ed20) Stream removed, broadcasting: 3\nI0220 12:52:31.427362    3673 log.go:172] (0xc00014c6e0) (0xc000370000) Stream removed, broadcasting: 5\nI0220 12:52:31.427474    3673 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0220 12:52:31.427526    3673 log.go:172] (0xc0006e4640) (1) Data frame handling\nI0220 12:52:31.427599    3673 log.go:172] (0xc0006e4640) (1) Data frame sent\nI0220 12:52:31.427618    3673 log.go:172] (0xc00014c6e0) (0xc0006e4640) Stream removed, broadcasting: 1\nI0220 12:52:31.427634    3673 log.go:172] (0xc00014c6e0) Go away received\nI0220 12:52:31.428180    3673 log.go:172] (0xc00014c6e0) (0xc0006e4640) Stream removed, broadcasting: 1\nI0220 12:52:31.428236    3673 log.go:172] (0xc00014c6e0) (0xc00059ed20) Stream removed, broadcasting: 3\nI0220 12:52:31.428250    3673 log.go:172] (0xc00014c6e0) (0xc000370000) Stream removed, broadcasting: 5\n"
Feb 20 12:52:31.435: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 12:52:31.435: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 12:52:41.572: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 20 12:52:51.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-27zps ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 12:52:52.305: INFO: stderr: "I0220 12:52:51.938485    3695 log.go:172] (0xc00015c840) (0xc00062f2c0) Create stream\nI0220 12:52:51.938697    3695 log.go:172] (0xc00015c840) (0xc00062f2c0) Stream added, broadcasting: 1\nI0220 12:52:51.944441    3695 log.go:172] (0xc00015c840) Reply frame received for 1\nI0220 12:52:51.944478    3695 log.go:172] (0xc00015c840) (0xc000744000) Create stream\nI0220 12:52:51.944488    3695 log.go:172] (0xc00015c840) (0xc000744000) Stream added, broadcasting: 3\nI0220 12:52:51.945566    3695 log.go:172] (0xc00015c840) Reply frame received for 3\nI0220 12:52:51.945587    3695 log.go:172] (0xc00015c840) (0xc00062f360) Create stream\nI0220 12:52:51.945594    3695 log.go:172] (0xc00015c840) (0xc00062f360) Stream added, broadcasting: 5\nI0220 12:52:51.946406    3695 log.go:172] (0xc00015c840) Reply frame received for 5\nI0220 12:52:52.091113    3695 log.go:172] (0xc00015c840) Data frame received for 3\nI0220 12:52:52.091234    3695 log.go:172] (0xc000744000) (3) Data frame handling\nI0220 12:52:52.091254    3695 log.go:172] (0xc000744000) (3) Data frame sent\nI0220 12:52:52.294351    3695 log.go:172] (0xc00015c840) Data frame received for 1\nI0220 12:52:52.294666    3695 log.go:172] (0xc00015c840) (0xc000744000) Stream removed, broadcasting: 3\nI0220 12:52:52.294812    3695 log.go:172] (0xc00062f2c0) (1) Data frame handling\nI0220 12:52:52.294969    3695 log.go:172] (0xc00015c840) (0xc00062f360) Stream removed, broadcasting: 5\nI0220 12:52:52.295045    3695 log.go:172] (0xc00062f2c0) (1) Data frame sent\nI0220 12:52:52.295110    3695 log.go:172] (0xc00015c840) (0xc00062f2c0) Stream removed, broadcasting: 1\nI0220 12:52:52.295154    3695 log.go:172] (0xc00015c840) Go away received\nI0220 12:52:52.295301    3695 log.go:172] (0xc00015c840) (0xc00062f2c0) Stream removed, broadcasting: 1\nI0220 12:52:52.295319    3695 log.go:172] (0xc00015c840) (0xc000744000) Stream removed, broadcasting: 3\nI0220 12:52:52.295324    3695 log.go:172] (0xc00015c840) (0xc00062f360) Stream removed, broadcasting: 5\n"
Feb 20 12:52:52.305: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 12:52:52.305: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 12:53:02.391: INFO: Waiting for StatefulSet e2e-tests-statefulset-27zps/ss2 to complete update
Feb 20 12:53:02.391: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 20 12:53:02.391: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 20 12:53:12.419: INFO: Waiting for StatefulSet e2e-tests-statefulset-27zps/ss2 to complete update
Feb 20 12:53:12.420: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 20 12:53:12.420: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 20 12:53:22.537: INFO: Waiting for StatefulSet e2e-tests-statefulset-27zps/ss2 to complete update
Feb 20 12:53:22.537: INFO: Waiting for Pod e2e-tests-statefulset-27zps/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 20 12:53:32.417: INFO: Waiting for StatefulSet e2e-tests-statefulset-27zps/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 20 12:53:42.441: INFO: Deleting all statefulset in ns e2e-tests-statefulset-27zps
Feb 20 12:53:42.501: INFO: Scaling statefulset ss2 to 0
Feb 20 12:54:12.623: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 12:54:12.648: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:54:12.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-27zps" for this suite.
Feb 20 12:54:20.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:54:20.957: INFO: namespace: e2e-tests-statefulset-27zps, resource: bindings, ignored listing per whitelist
Feb 20 12:54:20.969: INFO: namespace e2e-tests-statefulset-27zps deletion completed in 8.279879793s

• [SLOW TEST:233.655 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
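The StatefulSet test above temporarily takes a pod unready (the `mv index.html` exec, presumably breaking a readiness check on the served file), updates the template image from nginx:1.14-alpine to nginx:1.15-alpine to create a new controller revision, waits for the rollout in reverse ordinal order, then restores the old image to roll back. A sketch of the kind of spec involved, with the RollingUpdate strategy that drives those revision waits; the service name, labels, and replica count are placeholders:

package e2esketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxStatefulSet builds a 3-replica StatefulSet with the RollingUpdate
// strategy; changing the template image (1.14-alpine -> 1.15-alpine and
// back) produces the controller revisions being waited on above.
func nginxStatefulSet(name, serviceName, image string) *appsv1.StatefulSet {
	replicas := int32(3)
	labels := map[string]string{"app": name}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: serviceName,
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: image, // e.g. docker.io/library/nginx:1.14-alpine
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
}
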
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:54:20.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:54:33.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-t64z2" for this suite.
Feb 20 12:55:25.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:55:25.686: INFO: namespace: e2e-tests-kubelet-test-t64z2, resource: bindings, ignored listing per whitelist
Feb 20 12:55:25.747: INFO: namespace e2e-tests-kubelet-test-t64z2 deletion completed in 52.2602091s

• [SLOW TEST:64.778 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
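This Kubelet test runs a busybox container with a read-only root filesystem and verifies that writes to / fail. A sketch of the relevant security context; the write-attempt command is illustrative, not the suite's exact one:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readOnlyRootFSPod runs busybox with ReadOnlyRootFilesystem set, so an
// attempted write to the root filesystem is expected to fail.
func readOnlyRootFSPod(name string) *corev1.Pod {
	readOnly := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo test > /file; sleep 60"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}
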
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:55:25.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 20 12:55:48.392: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 12:55:48.446: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 12:55:50.446: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 12:55:50.470: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 12:55:52.446: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 12:55:52.475: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 12:55:54.446: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 12:55:54.467: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 12:55:56.446: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 12:55:56.470: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 12:55:58.446: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 12:55:58.483: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 12:56:00.446: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 12:56:00.518: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 12:56:02.447: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 12:56:02.582: INFO: Pod pod-with-prestop-http-hook still exists
Feb 20 12:56:04.446: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 20 12:56:04.600: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:56:04.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gtrd6" for this suite.
Feb 20 12:56:28.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:56:28.707: INFO: namespace: e2e-tests-container-lifecycle-hook-gtrd6, resource: bindings, ignored listing per whitelist
Feb 20 12:56:28.779: INFO: namespace e2e-tests-container-lifecycle-hook-gtrd6 deletion completed in 24.131547546s

• [SLOW TEST:63.031 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
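The lifecycle-hook test first starts a helper pod that records HTTP requests, then creates a second pod whose container carries a preStop httpGet hook pointed at that helper; deleting the second pod should fire the hook before the container exits, which is what the "check prestop hook" step verifies. A sketch of the hooked pod, assuming the helper's IP and port are already known; the path, port, and image below are placeholders, and the field layout follows the v1.13-era core/v1 API (later releases rename Handler to LifecycleHandler):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// preStopHTTPHookPod builds a pod whose container calls back to a handler
// pod via an HTTP GET when it is being stopped.
func preStopHTTPHookPod(name, handlerIP string, handlerPort int) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-http-hook",
				Image:   "busybox", // placeholder; any long-running container works
				Command: []string{"/bin/sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: handlerIP,
							Path: "/echo?msg=prestop",
							Port: intstr.FromInt(handlerPort),
						},
					},
				},
			}},
		},
	}
}
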
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:56:28.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 20 12:56:29.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-pf458" to be "success or failure"
Feb 20 12:56:29.422: INFO: Pod "downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 154.325698ms
Feb 20 12:56:31.908: INFO: Pod "downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.640881483s
Feb 20 12:56:33.942: INFO: Pod "downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.674203963s
Feb 20 12:56:36.396: INFO: Pod "downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.128793206s
Feb 20 12:56:38.437: INFO: Pod "downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.169425084s
Feb 20 12:56:40.460: INFO: Pod "downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.192269026s
STEP: Saw pod success
Feb 20 12:56:40.460: INFO: Pod "downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008" satisfied condition "success or failure"
Feb 20 12:56:40.470: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008 container client-container: 
STEP: delete the pod
Feb 20 12:56:40.635: INFO: Waiting for pod downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008 to disappear
Feb 20 12:56:40.641: INFO: Pod downwardapi-volume-69488614-53e0-11ea-bcb7-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 12:56:40.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pf458" for this suite.
Feb 20 12:56:48.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 12:56:48.857: INFO: namespace: e2e-tests-projected-pf458, resource: bindings, ignored listing per whitelist
Feb 20 12:56:48.949: INFO: namespace e2e-tests-projected-pf458 deletion completed in 8.301420743s

• [SLOW TEST:20.170 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
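This downward API case leaves the container's memory limit unset and expects the projected resourceFieldRef for limits.memory to fall back to the node's allocatable memory. A sketch of the projected volume that exposes that value; the file name, mount path, image, and command are placeholders:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryLimitPod projects limits.memory for a container that has
// no memory limit set, so the projected file should report the node
// allocatable value, which is what the test above asserts on.
func downwardAPIMemoryLimitPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder for the suite's mount-test image
				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/memory_limit"},
				// Note: no Resources.Limits here, so limits.memory defaults to
				// node allocatable when projected below.
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}
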
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 12:56:48.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-zjt4h
Feb 20 12:57:01.536: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-zjt4h
STEP: checking the pod's current state and verifying that restartCount is present
Feb 20 12:57:01.541: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 13:01:02.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-zjt4h" for this suite.
Feb 20 13:01:08.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:01:08.667: INFO: namespace: e2e-tests-container-probe-zjt4h, resource: bindings, ignored listing per whitelist
Feb 20 13:01:08.844: INFO: namespace e2e-tests-container-probe-zjt4h deletion completed in 6.36482222s

• [SLOW TEST:259.895 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
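The probe test creates a pod whose exec liveness probe (`cat /tmp/health`) keeps succeeding, then watches restartCount stay at 0 for roughly four minutes before deleting the pod. A sketch of such a pod, again using the v1.13-era Handler layout; the container name, image, and probe timings are illustrative rather than the suite's exact values:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessExecPod writes /tmp/health at startup and keeps running, so the
// exec probe `cat /tmp/health` always succeeds and restartCount should
// remain 0, as observed above.
func livenessExecPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo ok > /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
}
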
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 13:01:08.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-dsf9f
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-dsf9f
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-dsf9f
Feb 20 13:01:09.075: INFO: Found 0 stateful pods, waiting for 1
Feb 20 13:01:19.090: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 20 13:01:19.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 13:01:19.787: INFO: stderr: "I0220 13:01:19.325928    3717 log.go:172] (0xc0004964d0) (0xc0005e9400) Create stream\nI0220 13:01:19.326079    3717 log.go:172] (0xc0004964d0) (0xc0005e9400) Stream added, broadcasting: 1\nI0220 13:01:19.331820    3717 log.go:172] (0xc0004964d0) Reply frame received for 1\nI0220 13:01:19.331851    3717 log.go:172] (0xc0004964d0) (0xc000706000) Create stream\nI0220 13:01:19.331866    3717 log.go:172] (0xc0004964d0) (0xc000706000) Stream added, broadcasting: 3\nI0220 13:01:19.332732    3717 log.go:172] (0xc0004964d0) Reply frame received for 3\nI0220 13:01:19.332752    3717 log.go:172] (0xc0004964d0) (0xc0005e94a0) Create stream\nI0220 13:01:19.332759    3717 log.go:172] (0xc0004964d0) (0xc0005e94a0) Stream added, broadcasting: 5\nI0220 13:01:19.334250    3717 log.go:172] (0xc0004964d0) Reply frame received for 5\nI0220 13:01:19.565345    3717 log.go:172] (0xc0004964d0) Data frame received for 3\nI0220 13:01:19.565411    3717 log.go:172] (0xc000706000) (3) Data frame handling\nI0220 13:01:19.565436    3717 log.go:172] (0xc000706000) (3) Data frame sent\nI0220 13:01:19.771117    3717 log.go:172] (0xc0004964d0) (0xc000706000) Stream removed, broadcasting: 3\nI0220 13:01:19.771475    3717 log.go:172] (0xc0004964d0) Data frame received for 1\nI0220 13:01:19.771502    3717 log.go:172] (0xc0005e9400) (1) Data frame handling\nI0220 13:01:19.771526    3717 log.go:172] (0xc0005e9400) (1) Data frame sent\nI0220 13:01:19.771674    3717 log.go:172] (0xc0004964d0) (0xc0005e9400) Stream removed, broadcasting: 1\nI0220 13:01:19.771887    3717 log.go:172] (0xc0004964d0) (0xc0005e94a0) Stream removed, broadcasting: 5\nI0220 13:01:19.772019    3717 log.go:172] (0xc0004964d0) Go away received\nI0220 13:01:19.772094    3717 log.go:172] (0xc0004964d0) (0xc0005e9400) Stream removed, broadcasting: 1\nI0220 13:01:19.772116    3717 log.go:172] (0xc0004964d0) (0xc000706000) Stream removed, broadcasting: 3\nI0220 13:01:19.772132    3717 log.go:172] (0xc0004964d0) (0xc0005e94a0) Stream removed, broadcasting: 5\n"
Feb 20 13:01:19.787: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 13:01:19.788: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 13:01:19.975: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 13:01:19.975: INFO: Waiting for statefulset status.replicas updated to 0
Feb 20 13:01:19.995: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 20 13:01:30.166: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 20 13:01:30.166: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  }]
Feb 20 13:01:30.166: INFO: 
Feb 20 13:01:30.166: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 20 13:01:31.462: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.862947498s
Feb 20 13:01:32.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.566406253s
Feb 20 13:01:33.562: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.486127004s
Feb 20 13:01:34.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.467073312s
Feb 20 13:01:35.629: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.454438368s
Feb 20 13:01:37.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.399168742s
Feb 20 13:01:38.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.451863324s
Feb 20 13:01:40.019: INFO: Verifying statefulset ss doesn't scale past 3 for another 243.516357ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-dsf9f
Feb 20 13:01:41.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:01:41.535: INFO: stderr: "I0220 13:01:41.199848    3739 log.go:172] (0xc000734160) (0xc00065a1e0) Create stream\nI0220 13:01:41.199974    3739 log.go:172] (0xc000734160) (0xc00065a1e0) Stream added, broadcasting: 1\nI0220 13:01:41.204929    3739 log.go:172] (0xc000734160) Reply frame received for 1\nI0220 13:01:41.204951    3739 log.go:172] (0xc000734160) (0xc000562be0) Create stream\nI0220 13:01:41.204957    3739 log.go:172] (0xc000734160) (0xc000562be0) Stream added, broadcasting: 3\nI0220 13:01:41.205857    3739 log.go:172] (0xc000734160) Reply frame received for 3\nI0220 13:01:41.205890    3739 log.go:172] (0xc000734160) (0xc00065a280) Create stream\nI0220 13:01:41.205902    3739 log.go:172] (0xc000734160) (0xc00065a280) Stream added, broadcasting: 5\nI0220 13:01:41.206929    3739 log.go:172] (0xc000734160) Reply frame received for 5\nI0220 13:01:41.360232    3739 log.go:172] (0xc000734160) Data frame received for 3\nI0220 13:01:41.360287    3739 log.go:172] (0xc000562be0) (3) Data frame handling\nI0220 13:01:41.360300    3739 log.go:172] (0xc000562be0) (3) Data frame sent\nI0220 13:01:41.530312    3739 log.go:172] (0xc000734160) Data frame received for 1\nI0220 13:01:41.530341    3739 log.go:172] (0xc00065a1e0) (1) Data frame handling\nI0220 13:01:41.530357    3739 log.go:172] (0xc00065a1e0) (1) Data frame sent\nI0220 13:01:41.530368    3739 log.go:172] (0xc000734160) (0xc00065a1e0) Stream removed, broadcasting: 1\nI0220 13:01:41.530570    3739 log.go:172] (0xc000734160) (0xc000562be0) Stream removed, broadcasting: 3\nI0220 13:01:41.530612    3739 log.go:172] (0xc000734160) (0xc00065a280) Stream removed, broadcasting: 5\nI0220 13:01:41.530635    3739 log.go:172] (0xc000734160) (0xc00065a1e0) Stream removed, broadcasting: 1\nI0220 13:01:41.530643    3739 log.go:172] (0xc000734160) (0xc000562be0) Stream removed, broadcasting: 3\nI0220 13:01:41.530663    3739 log.go:172] (0xc000734160) (0xc00065a280) Stream removed, broadcasting: 5\nI0220 13:01:41.530683    3739 log.go:172] (0xc000734160) Go away received\n"
Feb 20 13:01:41.536: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 13:01:41.536: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 13:01:41.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:01:42.003: INFO: stderr: "I0220 13:01:41.747268    3761 log.go:172] (0xc00013a790) (0xc0005d1220) Create stream\nI0220 13:01:41.747483    3761 log.go:172] (0xc00013a790) (0xc0005d1220) Stream added, broadcasting: 1\nI0220 13:01:41.763717    3761 log.go:172] (0xc00013a790) Reply frame received for 1\nI0220 13:01:41.763780    3761 log.go:172] (0xc00013a790) (0xc0005d12c0) Create stream\nI0220 13:01:41.763787    3761 log.go:172] (0xc00013a790) (0xc0005d12c0) Stream added, broadcasting: 3\nI0220 13:01:41.764851    3761 log.go:172] (0xc00013a790) Reply frame received for 3\nI0220 13:01:41.764875    3761 log.go:172] (0xc00013a790) (0xc0005d1360) Create stream\nI0220 13:01:41.764886    3761 log.go:172] (0xc00013a790) (0xc0005d1360) Stream added, broadcasting: 5\nI0220 13:01:41.768419    3761 log.go:172] (0xc00013a790) Reply frame received for 5\nI0220 13:01:41.864600    3761 log.go:172] (0xc00013a790) Data frame received for 3\nI0220 13:01:41.864645    3761 log.go:172] (0xc0005d12c0) (3) Data frame handling\nI0220 13:01:41.864657    3761 log.go:172] (0xc0005d12c0) (3) Data frame sent\nI0220 13:01:41.865268    3761 log.go:172] (0xc00013a790) Data frame received for 5\nI0220 13:01:41.865281    3761 log.go:172] (0xc0005d1360) (5) Data frame handling\nI0220 13:01:41.865289    3761 log.go:172] (0xc0005d1360) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0220 13:01:41.992407    3761 log.go:172] (0xc00013a790) (0xc0005d12c0) Stream removed, broadcasting: 3\nI0220 13:01:41.992595    3761 log.go:172] (0xc00013a790) Data frame received for 1\nI0220 13:01:41.992608    3761 log.go:172] (0xc0005d1220) (1) Data frame handling\nI0220 13:01:41.992624    3761 log.go:172] (0xc0005d1220) (1) Data frame sent\nI0220 13:01:41.992637    3761 log.go:172] (0xc00013a790) (0xc0005d1220) Stream removed, broadcasting: 1\nI0220 13:01:41.992810    3761 log.go:172] (0xc00013a790) (0xc0005d1360) Stream removed, broadcasting: 5\nI0220 13:01:41.992849    3761 log.go:172] (0xc00013a790) (0xc0005d1220) Stream removed, broadcasting: 1\nI0220 13:01:41.992861    3761 log.go:172] (0xc00013a790) (0xc0005d12c0) Stream removed, broadcasting: 3\nI0220 13:01:41.992872    3761 log.go:172] (0xc00013a790) (0xc0005d1360) Stream removed, broadcasting: 5\n"
Feb 20 13:01:42.003: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 13:01:42.003: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 13:01:42.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:01:42.647: INFO: stderr: "I0220 13:01:42.172529    3782 log.go:172] (0xc000708370) (0xc0007265a0) Create stream\nI0220 13:01:42.172849    3782 log.go:172] (0xc000708370) (0xc0007265a0) Stream added, broadcasting: 1\nI0220 13:01:42.178424    3782 log.go:172] (0xc000708370) Reply frame received for 1\nI0220 13:01:42.178446    3782 log.go:172] (0xc000708370) (0xc0005b0d20) Create stream\nI0220 13:01:42.178454    3782 log.go:172] (0xc000708370) (0xc0005b0d20) Stream added, broadcasting: 3\nI0220 13:01:42.179639    3782 log.go:172] (0xc000708370) Reply frame received for 3\nI0220 13:01:42.179660    3782 log.go:172] (0xc000708370) (0xc000672000) Create stream\nI0220 13:01:42.179669    3782 log.go:172] (0xc000708370) (0xc000672000) Stream added, broadcasting: 5\nI0220 13:01:42.180534    3782 log.go:172] (0xc000708370) Reply frame received for 5\nI0220 13:01:42.290686    3782 log.go:172] (0xc000708370) Data frame received for 3\nI0220 13:01:42.290757    3782 log.go:172] (0xc0005b0d20) (3) Data frame handling\nI0220 13:01:42.290772    3782 log.go:172] (0xc0005b0d20) (3) Data frame sent\nI0220 13:01:42.290831    3782 log.go:172] (0xc000708370) Data frame received for 5\nI0220 13:01:42.290837    3782 log.go:172] (0xc000672000) (5) Data frame handling\nI0220 13:01:42.290846    3782 log.go:172] (0xc000672000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0220 13:01:42.642312    3782 log.go:172] (0xc000708370) (0xc0005b0d20) Stream removed, broadcasting: 3\nI0220 13:01:42.642428    3782 log.go:172] (0xc000708370) Data frame received for 1\nI0220 13:01:42.642446    3782 log.go:172] (0xc000708370) (0xc000672000) Stream removed, broadcasting: 5\nI0220 13:01:42.642497    3782 log.go:172] (0xc0007265a0) (1) Data frame handling\nI0220 13:01:42.642516    3782 log.go:172] (0xc0007265a0) (1) Data frame sent\nI0220 13:01:42.642577    3782 log.go:172] (0xc000708370) (0xc0007265a0) Stream removed, broadcasting: 1\nI0220 13:01:42.642592    3782 log.go:172] (0xc000708370) Go away received\nI0220 13:01:42.642922    3782 log.go:172] (0xc000708370) (0xc0007265a0) Stream removed, broadcasting: 1\nI0220 13:01:42.642941    3782 log.go:172] (0xc000708370) (0xc0005b0d20) Stream removed, broadcasting: 3\nI0220 13:01:42.642948    3782 log.go:172] (0xc000708370) (0xc000672000) Stream removed, broadcasting: 5\n"
Feb 20 13:01:42.648: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 20 13:01:42.648: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 20 13:01:42.705: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 13:01:42.706: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Feb 20 13:01:52.722: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 13:01:52.722: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 20 13:01:52.722: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
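Editor's note: these readiness waits (and the Ready=false waits further down, once the probes have been broken) are plain polls of the pod phase plus its Ready condition. A one-pod sketch of the same check via kubectl jsonpath; the 10-second poll interval is an assumption, not the framework's constant.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podRunningReady returns the pod phase and whether its Ready condition is True.
    func podRunningReady(ns, pod string) (string, bool, error) {
        out, err := exec.Command("kubectl",
            "--kubeconfig", "/root/.kube/config",
            "get", "pod", pod, "-n", ns,
            "-o", `jsonpath={.status.phase} {.status.conditions[?(@.type=="Ready")].status}`).CombinedOutput()
        fields := strings.Fields(string(out))
        if len(fields) != 2 {
            return "", false, err
        }
        return fields[0], fields[1] == "True", err
    }

    func main() {
        for {
            phase, ready, err := podRunningReady("e2e-tests-statefulset-dsf9f", "ss-1")
            fmt.Printf("Waiting for pod ss-1 to enter Running - Ready=true, currently %s - Ready=%t (err=%v)\n",
                phase, ready, err)
            if phase == "Running" && ready {
                return
            }
            time.Sleep(10 * time.Second) // assumed poll interval
        }
    }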
STEP: Scale down will not halt with unhealthy stateful pod
Feb 20 13:01:52.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 13:01:53.340: INFO: stderr: "I0220 13:01:52.961606    3803 log.go:172] (0xc0006f62c0) (0xc0005c9360) Create stream\nI0220 13:01:52.961832    3803 log.go:172] (0xc0006f62c0) (0xc0005c9360) Stream added, broadcasting: 1\nI0220 13:01:52.967172    3803 log.go:172] (0xc0006f62c0) Reply frame received for 1\nI0220 13:01:52.967225    3803 log.go:172] (0xc0006f62c0) (0xc0005c9400) Create stream\nI0220 13:01:52.967240    3803 log.go:172] (0xc0006f62c0) (0xc0005c9400) Stream added, broadcasting: 3\nI0220 13:01:52.968641    3803 log.go:172] (0xc0006f62c0) Reply frame received for 3\nI0220 13:01:52.968662    3803 log.go:172] (0xc0006f62c0) (0xc0002fe000) Create stream\nI0220 13:01:52.968669    3803 log.go:172] (0xc0006f62c0) (0xc0002fe000) Stream added, broadcasting: 5\nI0220 13:01:52.969542    3803 log.go:172] (0xc0006f62c0) Reply frame received for 5\nI0220 13:01:53.150881    3803 log.go:172] (0xc0006f62c0) Data frame received for 3\nI0220 13:01:53.151011    3803 log.go:172] (0xc0005c9400) (3) Data frame handling\nI0220 13:01:53.151031    3803 log.go:172] (0xc0005c9400) (3) Data frame sent\nI0220 13:01:53.331558    3803 log.go:172] (0xc0006f62c0) (0xc0005c9400) Stream removed, broadcasting: 3\nI0220 13:01:53.331872    3803 log.go:172] (0xc0006f62c0) Data frame received for 1\nI0220 13:01:53.331967    3803 log.go:172] (0xc0006f62c0) (0xc0002fe000) Stream removed, broadcasting: 5\nI0220 13:01:53.332045    3803 log.go:172] (0xc0005c9360) (1) Data frame handling\nI0220 13:01:53.332086    3803 log.go:172] (0xc0005c9360) (1) Data frame sent\nI0220 13:01:53.332119    3803 log.go:172] (0xc0006f62c0) (0xc0005c9360) Stream removed, broadcasting: 1\nI0220 13:01:53.332176    3803 log.go:172] (0xc0006f62c0) Go away received\nI0220 13:01:53.332388    3803 log.go:172] (0xc0006f62c0) (0xc0005c9360) Stream removed, broadcasting: 1\nI0220 13:01:53.332405    3803 log.go:172] (0xc0006f62c0) (0xc0005c9400) Stream removed, broadcasting: 3\nI0220 13:01:53.332421    3803 log.go:172] (0xc0006f62c0) (0xc0002fe000) Stream removed, broadcasting: 5\n"
Feb 20 13:01:53.341: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 13:01:53.341: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 13:01:53.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 13:01:54.197: INFO: stderr: "I0220 13:01:53.542036    3825 log.go:172] (0xc0007082c0) (0xc0007ac640) Create stream\nI0220 13:01:53.542528    3825 log.go:172] (0xc0007082c0) (0xc0007ac640) Stream added, broadcasting: 1\nI0220 13:01:53.558671    3825 log.go:172] (0xc0007082c0) Reply frame received for 1\nI0220 13:01:53.558730    3825 log.go:172] (0xc0007082c0) (0xc00059adc0) Create stream\nI0220 13:01:53.558748    3825 log.go:172] (0xc0007082c0) (0xc00059adc0) Stream added, broadcasting: 3\nI0220 13:01:53.560482    3825 log.go:172] (0xc0007082c0) Reply frame received for 3\nI0220 13:01:53.560522    3825 log.go:172] (0xc0007082c0) (0xc00059af00) Create stream\nI0220 13:01:53.560530    3825 log.go:172] (0xc0007082c0) (0xc00059af00) Stream added, broadcasting: 5\nI0220 13:01:53.562202    3825 log.go:172] (0xc0007082c0) Reply frame received for 5\nI0220 13:01:53.980831    3825 log.go:172] (0xc0007082c0) Data frame received for 3\nI0220 13:01:53.980931    3825 log.go:172] (0xc00059adc0) (3) Data frame handling\nI0220 13:01:53.980964    3825 log.go:172] (0xc00059adc0) (3) Data frame sent\nI0220 13:01:54.193068    3825 log.go:172] (0xc0007082c0) (0xc00059adc0) Stream removed, broadcasting: 3\nI0220 13:01:54.193174    3825 log.go:172] (0xc0007082c0) Data frame received for 1\nI0220 13:01:54.193188    3825 log.go:172] (0xc0007082c0) (0xc00059af00) Stream removed, broadcasting: 5\nI0220 13:01:54.193265    3825 log.go:172] (0xc0007ac640) (1) Data frame handling\nI0220 13:01:54.193290    3825 log.go:172] (0xc0007ac640) (1) Data frame sent\nI0220 13:01:54.193304    3825 log.go:172] (0xc0007082c0) (0xc0007ac640) Stream removed, broadcasting: 1\nI0220 13:01:54.193353    3825 log.go:172] (0xc0007082c0) Go away received\nI0220 13:01:54.193596    3825 log.go:172] (0xc0007082c0) (0xc0007ac640) Stream removed, broadcasting: 1\nI0220 13:01:54.193617    3825 log.go:172] (0xc0007082c0) (0xc00059adc0) Stream removed, broadcasting: 3\nI0220 13:01:54.193640    3825 log.go:172] (0xc0007082c0) (0xc00059af00) Stream removed, broadcasting: 5\n"
Feb 20 13:01:54.197: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 13:01:54.197: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 20 13:01:54.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 20 13:01:54.995: INFO: stderr: "I0220 13:01:54.365528    3847 log.go:172] (0xc00015c6e0) (0xc00073e640) Create stream\nI0220 13:01:54.365675    3847 log.go:172] (0xc00015c6e0) (0xc00073e640) Stream added, broadcasting: 1\nI0220 13:01:54.371970    3847 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0220 13:01:54.372009    3847 log.go:172] (0xc00015c6e0) (0xc0005a4c80) Create stream\nI0220 13:01:54.372018    3847 log.go:172] (0xc00015c6e0) (0xc0005a4c80) Stream added, broadcasting: 3\nI0220 13:01:54.373481    3847 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0220 13:01:54.373502    3847 log.go:172] (0xc00015c6e0) (0xc0005a4dc0) Create stream\nI0220 13:01:54.373509    3847 log.go:172] (0xc00015c6e0) (0xc0005a4dc0) Stream added, broadcasting: 5\nI0220 13:01:54.374892    3847 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0220 13:01:54.809603    3847 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0220 13:01:54.809682    3847 log.go:172] (0xc0005a4c80) (3) Data frame handling\nI0220 13:01:54.809703    3847 log.go:172] (0xc0005a4c80) (3) Data frame sent\nI0220 13:01:54.988413    3847 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0220 13:01:54.988587    3847 log.go:172] (0xc00015c6e0) (0xc0005a4c80) Stream removed, broadcasting: 3\nI0220 13:01:54.988665    3847 log.go:172] (0xc00073e640) (1) Data frame handling\nI0220 13:01:54.988695    3847 log.go:172] (0xc00073e640) (1) Data frame sent\nI0220 13:01:54.988703    3847 log.go:172] (0xc00015c6e0) (0xc00073e640) Stream removed, broadcasting: 1\nI0220 13:01:54.988968    3847 log.go:172] (0xc00015c6e0) (0xc0005a4dc0) Stream removed, broadcasting: 5\nI0220 13:01:54.989004    3847 log.go:172] (0xc00015c6e0) (0xc00073e640) Stream removed, broadcasting: 1\nI0220 13:01:54.989014    3847 log.go:172] (0xc00015c6e0) (0xc0005a4c80) Stream removed, broadcasting: 3\nI0220 13:01:54.989029    3847 log.go:172] (0xc00015c6e0) (0xc0005a4dc0) Stream removed, broadcasting: 5\nI0220 13:01:54.989138    3847 log.go:172] (0xc00015c6e0) Go away received\n"
Feb 20 13:01:54.995: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 20 13:01:54.995: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

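Editor's note: the three kubectl exec calls just above are how the test makes every pod unhealthy: moving index.html out of the nginx web root makes the readiness probe fail (hence the Ready=False / ContainersNotReady conditions below) while the container itself keeps running. A minimal sketch of that step, shelling out to kubectl the same way the framework's RunHostCmd does; the breakReadiness helper name is mine, and the probe being a check on the served page is an inference from the conditions reported below.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // breakReadiness moves index.html out of the nginx web root inside a pod so
    // that a readiness probe against the served page starts failing while the
    // container keeps running, mirroring the logged kubectl exec commands.
    func breakReadiness(namespace, pod string) error {
        cmd := exec.Command("kubectl",
            "--kubeconfig", "/root/.kube/config",
            "exec", "--namespace", namespace, pod, "--",
            "/bin/sh", "-c", "mv -v /usr/share/nginx/html/index.html /tmp/ || true")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s: %s", pod, out)
        return err
    }

    func main() {
        ns := "e2e-tests-statefulset-dsf9f" // namespace from this run
        for _, pod := range []string{"ss-0", "ss-1", "ss-2"} {
            if err := breakReadiness(ns, pod); err != nil {
                fmt.Println("exec failed:", err)
            }
        }
    }

The trailing "|| true" mirrors the logged command: it keeps the exec from returning a non-zero code when the file has already been moved.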
Feb 20 13:01:54.995: INFO: Waiting for statefulset status.replicas to be updated to 0

Feb 20 13:01:55.064: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 20 13:02:05.092: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 13:02:05.092: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 13:02:05.092: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 20 13:02:05.140: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 20 13:02:05.140: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  }]
Feb 20 13:02:05.140: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:05.140: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:05.140: INFO: 
Feb 20 13:02:05.140: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 13:02:07.402: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 20 13:02:07.402: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  }]
Feb 20 13:02:07.402: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:07.402: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:07.402: INFO: 
Feb 20 13:02:07.402: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 13:02:08.875: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 20 13:02:08.875: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  }]
Feb 20 13:02:08.875: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:08.875: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:08.876: INFO: 
Feb 20 13:02:08.876: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 13:02:10.020: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 20 13:02:10.021: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  }]
Feb 20 13:02:10.021: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:10.021: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:10.021: INFO: 
Feb 20 13:02:10.021: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 13:02:11.204: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 20 13:02:11.205: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  }]
Feb 20 13:02:11.205: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:11.205: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:11.205: INFO: 
Feb 20 13:02:11.205: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 13:02:13.190: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 20 13:02:13.190: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  }]
Feb 20 13:02:13.190: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:13.190: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:13.190: INFO: 
Feb 20 13:02:13.190: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 20 13:02:14.247: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 20 13:02:14.247: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:09 +0000 UTC  }]
Feb 20 13:02:14.247: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:14.247: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-20 13:01:30 +0000 UTC  }]
Feb 20 13:02:14.247: INFO: 
Feb 20 13:02:14.247: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-dsf9f
Feb 20 13:02:15.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:02:15.622: INFO: rc: 1
Feb 20 13:02:15.622: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0014d11d0 exit status 1   true [0xc000302b88 0xc000302ec0 0xc000303048] [0xc000302b88 0xc000302ec0 0xc000303048] [0xc000302df0 0xc000302f38] [0x935700 0x935700] 0xc00264ff20 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

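Editor's note: every block from here down to the scale-to-0 at 13:07:19 is one iteration of the same retry loop: rerun the host command, get rc 1 (first because the nginx container is already gone, then because pod ss-0 itself has been removed by the scale-down), wait 10s, and try again until the timeout expires. A rough sketch of that retry shape using plain os/exec; the 6-minute budget is an assumed stand-in for the framework's real timeout.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runHostCmd runs a shell command inside a pod via kubectl exec and returns
    // the combined output; err is non-nil whenever the exit code is non-zero.
    func runHostCmd(ns, pod, shellCmd string) (string, error) {
        out, err := exec.Command("kubectl",
            "--kubeconfig", "/root/.kube/config",
            "exec", "--namespace", ns, pod, "--",
            "/bin/sh", "-c", shellCmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        const (
            ns       = "e2e-tests-statefulset-dsf9f"
            pod      = "ss-0"
            shellCmd = "mv -v /tmp/index.html /usr/share/nginx/html/ || true"
            interval = 10 * time.Second // matches "Waiting 10s to retry failed RunHostCmd"
            timeout  = 6 * time.Minute  // assumed budget; the framework's real value may differ
        )
        deadline := time.Now().Add(timeout)
        for {
            out, err := runHostCmd(ns, pod, shellCmd)
            if err == nil {
                fmt.Printf("stdout of %s on %s: %s\n", shellCmd, pod, out)
                return
            }
            if time.Now().After(deadline) {
                fmt.Printf("giving up on %s: %v\n", pod, err)
                return
            }
            fmt.Printf("rc != 0 (%v), waiting %s to retry\n", err, interval)
            time.Sleep(interval)
        }
    }

The loop only looks at the exit code, so it tolerates both failure modes seen in the log ("container not found" and "pods \"ss-0\" not found") the same way.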
Feb 20 13:02:25.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:02:25.746: INFO: rc: 1
Feb 20 13:02:25.746: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001516a20 exit status 1   true [0xc002204090 0xc0022040a8 0xc0022040c0] [0xc002204090 0xc0022040a8 0xc0022040c0] [0xc0022040a0 0xc0022040b8] [0x935700 0x935700] 0xc002a0c840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:02:35.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:02:35.878: INFO: rc: 1
Feb 20 13:02:35.878: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024f78f0 exit status 1   true [0xc001cb40a0 0xc001cb40b8 0xc001cb40d0] [0xc001cb40a0 0xc001cb40b8 0xc001cb40d0] [0xc001cb40b0 0xc001cb40c8] [0x935700 0x935700] 0xc00240af60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:02:45.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:02:45.993: INFO: rc: 1
Feb 20 13:02:45.993: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001516d50 exit status 1   true [0xc0022040c8 0xc002204100 0xc002204150] [0xc0022040c8 0xc002204100 0xc002204150] [0xc0022040f0 0xc002204140] [0x935700 0x935700] 0xc002a0df80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:02:55.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:02:56.098: INFO: rc: 1
Feb 20 13:02:56.099: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a34120 exit status 1   true [0xc0014e8000 0xc0014e8018 0xc0014e8030] [0xc0014e8000 0xc0014e8018 0xc0014e8030] [0xc0014e8010 0xc0014e8028] [0x935700 0x935700] 0xc0022ce240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:03:06.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:03:06.208: INFO: rc: 1
Feb 20 13:03:06.208: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a34240 exit status 1   true [0xc0014e8038 0xc0014e8050 0xc0014e8068] [0xc0014e8038 0xc0014e8050 0xc0014e8068] [0xc0014e8048 0xc0014e8060] [0x935700 0x935700] 0xc0022ce4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:03:16.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:03:16.357: INFO: rc: 1
Feb 20 13:03:16.357: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a34360 exit status 1   true [0xc0014e8070 0xc0014e8088 0xc0014e80a0] [0xc0014e8070 0xc0014e8088 0xc0014e80a0] [0xc0014e8080 0xc0014e8098] [0x935700 0x935700] 0xc0022ce780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:03:26.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:03:26.446: INFO: rc: 1
Feb 20 13:03:26.447: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024f7aa0 exit status 1   true [0xc001cb40d8 0xc001cb40f0 0xc001cb4108] [0xc001cb40d8 0xc001cb40f0 0xc001cb4108] [0xc001cb40e8 0xc001cb4100] [0x935700 0x935700] 0xc00240b4a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:03:36.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:03:36.623: INFO: rc: 1
Feb 20 13:03:36.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a34480 exit status 1   true [0xc0014e80a8 0xc0014e80c0 0xc0014e80d8] [0xc0014e80a8 0xc0014e80c0 0xc0014e80d8] [0xc0014e80b8 0xc0014e80d0] [0x935700 0x935700] 0xc0022cea20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:03:46.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:03:46.759: INFO: rc: 1
Feb 20 13:03:46.759: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a345d0 exit status 1   true [0xc0014e80e0 0xc0014e80f8 0xc0014e8110] [0xc0014e80e0 0xc0014e80f8 0xc0014e8110] [0xc0014e80f0 0xc0014e8108] [0x935700 0x935700] 0xc0022cecc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:03:56.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:03:56.839: INFO: rc: 1
Feb 20 13:03:56.839: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024f7c20 exit status 1   true [0xc001cb4110 0xc001cb4128 0xc001cb4140] [0xc001cb4110 0xc001cb4128 0xc001cb4140] [0xc001cb4120 0xc001cb4138] [0x935700 0x935700] 0xc00240b740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:04:06.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:04:07.015: INFO: rc: 1
Feb 20 13:04:07.015: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a346f0 exit status 1   true [0xc0014e8120 0xc0014e8138 0xc0014e8150] [0xc0014e8120 0xc0014e8138 0xc0014e8150] [0xc0014e8130 0xc0014e8148] [0x935700 0x935700] 0xc0022cef60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:04:17.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:04:17.169: INFO: rc: 1
Feb 20 13:04:17.169: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b0a4e0 exit status 1   true [0xc00016e000 0xc000302188 0xc000302390] [0xc00016e000 0xc000302188 0xc000302390] [0xc000302140 0xc0003022f8] [0x935700 0x935700] 0xc00264e9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:04:27.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:04:27.318: INFO: rc: 1
Feb 20 13:04:27.319: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014d0120 exit status 1   true [0xc0014e8000 0xc0014e8018 0xc0014e8030] [0xc0014e8000 0xc0014e8018 0xc0014e8030] [0xc0014e8010 0xc0014e8028] [0x935700 0x935700] 0xc002a0c240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:04:37.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:04:37.447: INFO: rc: 1
Feb 20 13:04:37.448: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a34150 exit status 1   true [0xc002204000 0xc002204020 0xc002204038] [0xc002204000 0xc002204020 0xc002204038] [0xc002204018 0xc002204030] [0x935700 0x935700] 0xc002820720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:04:47.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:04:47.595: INFO: rc: 1
Feb 20 13:04:47.596: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b0a6f0 exit status 1   true [0xc0003023a0 0xc000302660 0xc000302780] [0xc0003023a0 0xc000302660 0xc000302780] [0xc000302538 0xc000302720] [0x935700 0x935700] 0xc00264f7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:04:57.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:04:57.730: INFO: rc: 1
Feb 20 13:04:57.730: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014d02a0 exit status 1   true [0xc0014e8038 0xc0014e8050 0xc0014e8068] [0xc0014e8038 0xc0014e8050 0xc0014e8068] [0xc0014e8048 0xc0014e8060] [0x935700 0x935700] 0xc002a0c4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:05:07.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:05:07.903: INFO: rc: 1
Feb 20 13:05:07.903: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a342d0 exit status 1   true [0xc002204040 0xc002204060 0xc002204078] [0xc002204040 0xc002204060 0xc002204078] [0xc002204058 0xc002204070] [0x935700 0x935700] 0xc002820d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:05:17.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:05:18.026: INFO: rc: 1
Feb 20 13:05:18.026: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b0a990 exit status 1   true [0xc000302790 0xc0003029a8 0xc000302a28] [0xc000302790 0xc0003029a8 0xc000302a28] [0xc0003028d0 0xc000302a20] [0x935700 0x935700] 0xc00264fa40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:05:28.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:05:28.154: INFO: rc: 1
Feb 20 13:05:28.154: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001516120 exit status 1   true [0xc001cb4000 0xc001cb4018 0xc001cb4030] [0xc001cb4000 0xc001cb4018 0xc001cb4030] [0xc001cb4010 0xc001cb4028] [0x935700 0x935700] 0xc0022ce240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:05:38.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:05:38.255: INFO: rc: 1
Feb 20 13:05:38.255: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b0ac90 exit status 1   true [0xc000302a60 0xc000302b28 0xc000302c88] [0xc000302a60 0xc000302b28 0xc000302c88] [0xc000302ab8 0xc000302b88] [0x935700 0x935700] 0xc00264fe60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:05:48.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:05:48.400: INFO: rc: 1
Feb 20 13:05:48.401: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014d0420 exit status 1   true [0xc0014e8070 0xc0014e8088 0xc0014e80a0] [0xc0014e8070 0xc0014e8088 0xc0014e80a0] [0xc0014e8080 0xc0014e8098] [0x935700 0x935700] 0xc002a0de60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:05:58.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:05:58.583: INFO: rc: 1
Feb 20 13:05:58.584: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001516270 exit status 1   true [0xc001cb4038 0xc001cb4050 0xc001cb4068] [0xc001cb4038 0xc001cb4050 0xc001cb4068] [0xc001cb4048 0xc001cb4060] [0x935700 0x935700] 0xc0022ce4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:06:08.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:06:08.703: INFO: rc: 1
Feb 20 13:06:08.703: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024f6240 exit status 1   true [0xc000fea008 0xc000fea020 0xc000fea038] [0xc000fea008 0xc000fea020 0xc000fea038] [0xc000fea018 0xc000fea030] [0x935700 0x935700] 0xc00240a720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:06:18.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:06:18.792: INFO: rc: 1
Feb 20 13:06:18.793: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b0a510 exit status 1   true [0xc00016e000 0xc0014e8010 0xc0014e8028] [0xc00016e000 0xc0014e8010 0xc0014e8028] [0xc0014e8008 0xc0014e8020] [0x935700 0x935700] 0xc00264e9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:06:28.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:06:28.978: INFO: rc: 1
Feb 20 13:06:28.978: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014d0150 exit status 1   true [0xc000fea040 0xc000fea058 0xc000fea070] [0xc000fea040 0xc000fea058 0xc000fea070] [0xc000fea050 0xc000fea068] [0x935700 0x935700] 0xc00240aa20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:06:38.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:06:39.101: INFO: rc: 1
Feb 20 13:06:39.101: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b0a780 exit status 1   true [0xc0014e8030 0xc0014e8048 0xc0014e8060] [0xc0014e8030 0xc0014e8048 0xc0014e8060] [0xc0014e8040 0xc0014e8058] [0x935700 0x935700] 0xc00264f7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:06:49.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:06:49.187: INFO: rc: 1
Feb 20 13:06:49.188: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014d0270 exit status 1   true [0xc000fea078 0xc000fea090 0xc000fea0a8] [0xc000fea078 0xc000fea090 0xc000fea0a8] [0xc000fea088 0xc000fea0a0] [0x935700 0x935700] 0xc00240acc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:06:59.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:06:59.346: INFO: rc: 1
Feb 20 13:06:59.346: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014d03f0 exit status 1   true [0xc000fea0b0 0xc000fea0c8 0xc000fea0e0] [0xc000fea0b0 0xc000fea0c8 0xc000fea0e0] [0xc000fea0c0 0xc000fea0d8] [0x935700 0x935700] 0xc00240af60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:07:09.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:07:09.516: INFO: rc: 1
Feb 20 13:07:09.516: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a34120 exit status 1   true [0xc002204000 0xc002204020 0xc002204038] [0xc002204000 0xc002204020 0xc002204038] [0xc002204018 0xc002204030] [0x935700 0x935700] 0xc002a0c240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 20 13:07:19.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dsf9f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 20 13:07:19.647: INFO: rc: 1
Feb 20 13:07:19.647: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb 20 13:07:19.647: INFO: Scaling statefulset ss to 0
Feb 20 13:07:19.676: INFO: Waiting for statefulset status.replicas to be updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 20 13:07:19.681: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dsf9f
Feb 20 13:07:19.688: INFO: Scaling statefulset ss to 0
Feb 20 13:07:19.700: INFO: Waiting for statefulset status.replicas to be updated to 0
Feb 20 13:07:19.703: INFO: Deleting statefulset ss
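Editor's note: the cleanup above is always the same three moves: scale the set to zero, wait for status.replicas to report 0, then delete the StatefulSet object. An equivalent teardown sketch driven through kubectl; the 3-minute wait and 5-second poll interval are assumptions, not the framework's constants.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // kubectl runs one kubectl invocation against the test kubeconfig and
    // returns its trimmed combined output.
    func kubectl(args ...string) (string, error) {
        full := append([]string{"--kubeconfig", "/root/.kube/config"}, args...)
        out, err := exec.Command("kubectl", full...).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        const ns = "e2e-tests-statefulset-dsf9f"

        // Scale the StatefulSet down to zero replicas.
        if _, err := kubectl("scale", "statefulset", "ss", "--replicas=0", "-n", ns); err != nil {
            fmt.Println("scale failed:", err)
            return
        }

        // Wait until status.replicas reports 0 (the field may be empty once it is zero).
        deadline := time.Now().Add(3 * time.Minute) // assumed budget
        for {
            replicas, _ := kubectl("get", "statefulset", "ss", "-n", ns,
                "-o", "jsonpath={.status.replicas}")
            if replicas == "" || replicas == "0" {
                break
            }
            if time.Now().After(deadline) {
                fmt.Println("timed out waiting for status.replicas=0, last value:", replicas)
                return
            }
            time.Sleep(5 * time.Second)
        }

        // Finally delete the StatefulSet object itself.
        if _, err := kubectl("delete", "statefulset", "ss", "-n", ns); err != nil {
            fmt.Println("delete failed:", err)
        }
    }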
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 13:07:19.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-dsf9f" for this suite.
Feb 20 13:07:27.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:07:28.096: INFO: namespace: e2e-tests-statefulset-dsf9f, resource: bindings, ignored listing per whitelist
Feb 20 13:07:28.109: INFO: namespace e2e-tests-statefulset-dsf9f deletion completed in 8.346796186s

• [SLOW TEST:379.265 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 13:07:28.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-9l48z
I0220 13:07:28.335916       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-9l48z, replica count: 1
I0220 13:07:29.386710       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 13:07:30.387214       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 13:07:31.387850       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 13:07:32.388396       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 13:07:33.388835       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 13:07:34.389161       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 13:07:35.389501       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 13:07:36.389746       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 13:07:37.390412       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 13:07:38.391112       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0220 13:07:39.391523       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
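Editor's note: from here on the test does the actual measurement: for each generated Service it records roughly the time between creating the Service and observing its Endpoints object pick up the backing pod, which is the bracketed duration on every "Got endpoints" line below. The real test drives the API directly and watches Endpoints; the sketch below approximates it by polling through kubectl, and the service name is a placeholder rather than one of the generated latency-svc-* names.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // kubectl runs one kubectl invocation against the test kubeconfig and
    // returns its trimmed combined output.
    func kubectl(args ...string) (string, error) {
        full := append([]string{"--kubeconfig", "/root/.kube/config"}, args...)
        out, err := exec.Command("kubectl", full...).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        const (
            ns  = "e2e-tests-svc-latency-9l48z"
            svc = "latency-svc-demo" // placeholder; the test generates random suffixes
        )

        // Create a Service in front of the svc-latency-rc pod and start the clock.
        start := time.Now()
        if _, err := kubectl("expose", "rc", "svc-latency-rc", "--name", svc, "--port", "80", "-n", ns); err != nil {
            fmt.Println("expose failed:", err)
            return
        }

        // Poll the Endpoints object until it lists at least one address.
        for {
            addrs, _ := kubectl("get", "endpoints", svc, "-n", ns,
                "-o", "jsonpath={.subsets[*].addresses[*].ip}")
            if addrs != "" {
                fmt.Printf("Got endpoints: %s [%s]\n", svc, time.Since(start))
                return
            }
            time.Sleep(50 * time.Millisecond)
        }
    }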
Feb 20 13:07:39.735: INFO: Created: latency-svc-7nxpt
Feb 20 13:07:39.767: INFO: Got endpoints: latency-svc-7nxpt [275.452821ms]
Feb 20 13:07:40.000: INFO: Created: latency-svc-mjfnr
Feb 20 13:07:40.034: INFO: Got endpoints: latency-svc-mjfnr [264.900616ms]
Feb 20 13:07:40.097: INFO: Created: latency-svc-zbpdm
Feb 20 13:07:40.231: INFO: Got endpoints: latency-svc-zbpdm [461.936989ms]
Feb 20 13:07:40.249: INFO: Created: latency-svc-97r4h
Feb 20 13:07:40.269: INFO: Got endpoints: latency-svc-97r4h [500.583677ms]
Feb 20 13:07:40.313: INFO: Created: latency-svc-zvpl5
Feb 20 13:07:40.482: INFO: Got endpoints: latency-svc-zvpl5 [713.329061ms]
Feb 20 13:07:40.560: INFO: Created: latency-svc-th4pb
Feb 20 13:07:40.758: INFO: Got endpoints: latency-svc-th4pb [989.783475ms]
Feb 20 13:07:40.769: INFO: Created: latency-svc-4r7lx
Feb 20 13:07:40.825: INFO: Created: latency-svc-qk5z5
Feb 20 13:07:40.838: INFO: Got endpoints: latency-svc-4r7lx [1.070152055s]
Feb 20 13:07:41.034: INFO: Got endpoints: latency-svc-qk5z5 [1.265064185s]
Feb 20 13:07:41.066: INFO: Created: latency-svc-8clv8
Feb 20 13:07:41.293: INFO: Got endpoints: latency-svc-8clv8 [1.524695451s]
Feb 20 13:07:41.330: INFO: Created: latency-svc-ggt56
Feb 20 13:07:41.351: INFO: Got endpoints: latency-svc-ggt56 [1.582581723s]
Feb 20 13:07:41.543: INFO: Created: latency-svc-j5s6h
Feb 20 13:07:41.578: INFO: Got endpoints: latency-svc-j5s6h [1.810820485s]
Feb 20 13:07:41.759: INFO: Created: latency-svc-ps9wp
Feb 20 13:07:41.836: INFO: Got endpoints: latency-svc-ps9wp [2.068852127s]
Feb 20 13:07:42.013: INFO: Created: latency-svc-bnbsd
Feb 20 13:07:42.021: INFO: Got endpoints: latency-svc-bnbsd [2.251759334s]
Feb 20 13:07:42.083: INFO: Created: latency-svc-fd6nh
Feb 20 13:07:42.272: INFO: Got endpoints: latency-svc-fd6nh [2.503727604s]
Feb 20 13:07:42.307: INFO: Created: latency-svc-jwc8r
Feb 20 13:07:42.481: INFO: Got endpoints: latency-svc-jwc8r [2.711369831s]
Feb 20 13:07:42.517: INFO: Created: latency-svc-lmfn9
Feb 20 13:07:42.563: INFO: Got endpoints: latency-svc-lmfn9 [2.794432396s]
Feb 20 13:07:42.727: INFO: Created: latency-svc-pzv99
Feb 20 13:07:42.771: INFO: Got endpoints: latency-svc-pzv99 [2.736682005s]
Feb 20 13:07:42.821: INFO: Created: latency-svc-cxxq7
Feb 20 13:07:42.991: INFO: Got endpoints: latency-svc-cxxq7 [2.75998239s]
Feb 20 13:07:43.014: INFO: Created: latency-svc-mcsms
Feb 20 13:07:43.030: INFO: Got endpoints: latency-svc-mcsms [2.761052416s]
Feb 20 13:07:43.168: INFO: Created: latency-svc-bd6tz
Feb 20 13:07:43.170: INFO: Got endpoints: latency-svc-bd6tz [2.687911314s]
Feb 20 13:07:43.223: INFO: Created: latency-svc-p9t5s
Feb 20 13:07:43.241: INFO: Got endpoints: latency-svc-p9t5s [2.483780263s]
Feb 20 13:07:43.382: INFO: Created: latency-svc-l9p7f
Feb 20 13:07:43.407: INFO: Got endpoints: latency-svc-l9p7f [2.569056545s]
Feb 20 13:07:43.642: INFO: Created: latency-svc-c2jw5
Feb 20 13:07:43.642: INFO: Got endpoints: latency-svc-c2jw5 [2.608027748s]
Feb 20 13:07:43.842: INFO: Created: latency-svc-w424n
Feb 20 13:07:43.853: INFO: Got endpoints: latency-svc-w424n [2.560368918s]
Feb 20 13:07:44.137: INFO: Created: latency-svc-gldcx
Feb 20 13:07:44.161: INFO: Got endpoints: latency-svc-gldcx [2.809967937s]
Feb 20 13:07:44.407: INFO: Created: latency-svc-n7j2c
Feb 20 13:07:44.443: INFO: Got endpoints: latency-svc-n7j2c [2.86462772s]
Feb 20 13:07:45.487: INFO: Created: latency-svc-cmghm
Feb 20 13:07:45.500: INFO: Got endpoints: latency-svc-cmghm [3.663857363s]
Feb 20 13:07:45.518: INFO: Created: latency-svc-qwtdb
Feb 20 13:07:45.534: INFO: Got endpoints: latency-svc-qwtdb [3.512571117s]
Feb 20 13:07:45.670: INFO: Created: latency-svc-9x4xc
Feb 20 13:07:45.732: INFO: Created: latency-svc-2mp9t
Feb 20 13:07:45.736: INFO: Got endpoints: latency-svc-9x4xc [3.463324922s]
Feb 20 13:07:45.744: INFO: Got endpoints: latency-svc-2mp9t [3.263508874s]
Feb 20 13:07:45.946: INFO: Created: latency-svc-682gx
Feb 20 13:07:45.962: INFO: Got endpoints: latency-svc-682gx [3.398017365s]
Feb 20 13:07:46.196: INFO: Created: latency-svc-p8f6t
Feb 20 13:07:46.220: INFO: Got endpoints: latency-svc-p8f6t [3.448888634s]
Feb 20 13:07:46.390: INFO: Created: latency-svc-xsc72
Feb 20 13:07:46.413: INFO: Got endpoints: latency-svc-xsc72 [3.420705441s]
Feb 20 13:07:47.199: INFO: Created: latency-svc-mwwrh
Feb 20 13:07:47.203: INFO: Got endpoints: latency-svc-mwwrh [4.172628271s]
Feb 20 13:07:47.463: INFO: Created: latency-svc-2mntx
Feb 20 13:07:47.485: INFO: Got endpoints: latency-svc-2mntx [4.314817528s]
Feb 20 13:07:47.683: INFO: Created: latency-svc-9wgxq
Feb 20 13:07:47.702: INFO: Got endpoints: latency-svc-9wgxq [4.46063443s]
Feb 20 13:07:47.911: INFO: Created: latency-svc-48wgf
Feb 20 13:07:48.010: INFO: Got endpoints: latency-svc-48wgf [4.60212355s]
Feb 20 13:07:48.095: INFO: Created: latency-svc-kwgxr
Feb 20 13:07:48.189: INFO: Got endpoints: latency-svc-kwgxr [4.547264244s]
Feb 20 13:07:48.256: INFO: Created: latency-svc-jhprr
Feb 20 13:07:48.366: INFO: Got endpoints: latency-svc-jhprr [4.512376037s]
Feb 20 13:07:48.393: INFO: Created: latency-svc-fdgjf
Feb 20 13:07:48.405: INFO: Got endpoints: latency-svc-fdgjf [4.243342227s]
Feb 20 13:07:48.633: INFO: Created: latency-svc-xrhs5
Feb 20 13:07:48.669: INFO: Got endpoints: latency-svc-xrhs5 [4.225660785s]
Feb 20 13:07:49.040: INFO: Created: latency-svc-ttb4h
Feb 20 13:07:49.073: INFO: Got endpoints: latency-svc-ttb4h [3.572137261s]
Feb 20 13:07:49.275: INFO: Created: latency-svc-zhvnz
Feb 20 13:07:49.291: INFO: Got endpoints: latency-svc-zhvnz [3.757332109s]
Feb 20 13:07:49.509: INFO: Created: latency-svc-bt8z2
Feb 20 13:07:49.538: INFO: Got endpoints: latency-svc-bt8z2 [3.802678054s]
Feb 20 13:07:49.746: INFO: Created: latency-svc-7vlcc
Feb 20 13:07:49.780: INFO: Got endpoints: latency-svc-7vlcc [4.035627636s]
Feb 20 13:07:50.051: INFO: Created: latency-svc-2q2ts
Feb 20 13:07:50.075: INFO: Got endpoints: latency-svc-2q2ts [4.11317443s]
Feb 20 13:07:50.383: INFO: Created: latency-svc-l5nj5
Feb 20 13:07:50.418: INFO: Got endpoints: latency-svc-l5nj5 [4.196915211s]
Feb 20 13:07:50.831: INFO: Created: latency-svc-hw7ws
Feb 20 13:07:51.020: INFO: Got endpoints: latency-svc-hw7ws [4.607279335s]
Feb 20 13:07:51.112: INFO: Created: latency-svc-gsf4n
Feb 20 13:07:51.190: INFO: Got endpoints: latency-svc-gsf4n [3.986703326s]
Feb 20 13:07:51.449: INFO: Created: latency-svc-z78v9
Feb 20 13:07:51.492: INFO: Got endpoints: latency-svc-z78v9 [4.006150672s]
Feb 20 13:07:51.727: INFO: Created: latency-svc-x4s62
Feb 20 13:07:51.886: INFO: Got endpoints: latency-svc-x4s62 [4.183613755s]
Feb 20 13:07:51.921: INFO: Created: latency-svc-8wk75
Feb 20 13:07:51.966: INFO: Got endpoints: latency-svc-8wk75 [3.955669563s]
Feb 20 13:07:52.345: INFO: Created: latency-svc-k8w8t
Feb 20 13:07:52.417: INFO: Got endpoints: latency-svc-k8w8t [4.227476921s]
Feb 20 13:07:52.572: INFO: Created: latency-svc-wl75k
Feb 20 13:07:52.882: INFO: Got endpoints: latency-svc-wl75k [4.515165865s]
Feb 20 13:07:52.887: INFO: Created: latency-svc-h787q
Feb 20 13:07:52.914: INFO: Got endpoints: latency-svc-h787q [4.5093816s]
Feb 20 13:07:53.149: INFO: Created: latency-svc-7vmq8
Feb 20 13:07:53.172: INFO: Got endpoints: latency-svc-7vmq8 [4.502861043s]
Feb 20 13:07:53.326: INFO: Created: latency-svc-f8qpk
Feb 20 13:07:53.400: INFO: Got endpoints: latency-svc-f8qpk [4.327462242s]
Feb 20 13:07:53.511: INFO: Created: latency-svc-dpl46
Feb 20 13:07:53.520: INFO: Got endpoints: latency-svc-dpl46 [4.229133299s]
Feb 20 13:07:53.696: INFO: Created: latency-svc-vlm6m
Feb 20 13:07:53.833: INFO: Got endpoints: latency-svc-vlm6m [4.294989238s]
Feb 20 13:07:53.852: INFO: Created: latency-svc-wn6jn
Feb 20 13:07:53.881: INFO: Got endpoints: latency-svc-wn6jn [4.101115323s]
Feb 20 13:07:54.079: INFO: Created: latency-svc-9d4q6
Feb 20 13:07:54.114: INFO: Got endpoints: latency-svc-9d4q6 [4.038233745s]
Feb 20 13:07:54.237: INFO: Created: latency-svc-whlrl
Feb 20 13:07:54.259: INFO: Got endpoints: latency-svc-whlrl [3.841083474s]
Feb 20 13:07:54.551: INFO: Created: latency-svc-btrcx
Feb 20 13:07:54.835: INFO: Got endpoints: latency-svc-btrcx [3.814398135s]
Feb 20 13:07:54.915: INFO: Created: latency-svc-6hxpz
Feb 20 13:07:54.915: INFO: Got endpoints: latency-svc-6hxpz [3.725267746s]
Feb 20 13:07:55.217: INFO: Created: latency-svc-j6hmq
Feb 20 13:07:55.244: INFO: Got endpoints: latency-svc-j6hmq [3.751600026s]
Feb 20 13:07:55.387: INFO: Created: latency-svc-nwznw
Feb 20 13:07:55.411: INFO: Got endpoints: latency-svc-nwznw [3.524077617s]
Feb 20 13:07:55.646: INFO: Created: latency-svc-8z2cs
Feb 20 13:07:55.680: INFO: Got endpoints: latency-svc-8z2cs [3.714313594s]
Feb 20 13:07:55.889: INFO: Created: latency-svc-7v9tm
Feb 20 13:07:55.945: INFO: Got endpoints: latency-svc-7v9tm [3.527367277s]
Feb 20 13:07:56.302: INFO: Created: latency-svc-cqq99
Feb 20 13:07:56.322: INFO: Got endpoints: latency-svc-cqq99 [3.440470964s]
Feb 20 13:07:56.572: INFO: Created: latency-svc-2pcp7
Feb 20 13:07:56.614: INFO: Got endpoints: latency-svc-2pcp7 [3.700100266s]
Feb 20 13:07:56.860: INFO: Created: latency-svc-w9qpv
Feb 20 13:07:56.905: INFO: Got endpoints: latency-svc-w9qpv [3.733330853s]
Feb 20 13:07:57.228: INFO: Created: latency-svc-bm9g2
Feb 20 13:07:57.272: INFO: Got endpoints: latency-svc-bm9g2 [3.872028969s]
Feb 20 13:07:57.513: INFO: Created: latency-svc-n5f2z
Feb 20 13:07:57.531: INFO: Got endpoints: latency-svc-n5f2z [4.010753599s]
Feb 20 13:07:57.702: INFO: Created: latency-svc-qrk6h
Feb 20 13:07:57.705: INFO: Got endpoints: latency-svc-qrk6h [3.870979831s]
Feb 20 13:07:57.765: INFO: Created: latency-svc-86lsg
Feb 20 13:07:57.776: INFO: Got endpoints: latency-svc-86lsg [3.894194448s]
Feb 20 13:07:57.887: INFO: Created: latency-svc-l28c7
Feb 20 13:07:57.915: INFO: Got endpoints: latency-svc-l28c7 [3.801272181s]
Feb 20 13:07:58.131: INFO: Created: latency-svc-4n98r
Feb 20 13:07:58.151: INFO: Got endpoints: latency-svc-4n98r [3.891838497s]
Feb 20 13:07:58.393: INFO: Created: latency-svc-jfgsd
Feb 20 13:07:58.439: INFO: Created: latency-svc-6xkgw
Feb 20 13:07:58.445: INFO: Got endpoints: latency-svc-jfgsd [3.609789429s]
Feb 20 13:07:58.557: INFO: Got endpoints: latency-svc-6xkgw [3.641874671s]
Feb 20 13:07:58.618: INFO: Created: latency-svc-gf9hw
Feb 20 13:07:58.625: INFO: Got endpoints: latency-svc-gf9hw [3.381732792s]
Feb 20 13:07:58.796: INFO: Created: latency-svc-cprh4
Feb 20 13:07:58.809: INFO: Got endpoints: latency-svc-cprh4 [3.398087114s]
Feb 20 13:07:58.910: INFO: Created: latency-svc-75mvm
Feb 20 13:07:59.048: INFO: Got endpoints: latency-svc-75mvm [3.367533384s]
Feb 20 13:07:59.064: INFO: Created: latency-svc-vnvd8
Feb 20 13:07:59.166: INFO: Got endpoints: latency-svc-vnvd8 [3.221160826s]
Feb 20 13:07:59.485: INFO: Created: latency-svc-vg2jt
Feb 20 13:07:59.646: INFO: Got endpoints: latency-svc-vg2jt [3.323161488s]
Feb 20 13:08:00.132: INFO: Created: latency-svc-hmjwh
Feb 20 13:08:00.168: INFO: Got endpoints: latency-svc-hmjwh [3.553622757s]
Feb 20 13:08:00.297: INFO: Created: latency-svc-xpc87
Feb 20 13:08:00.328: INFO: Got endpoints: latency-svc-xpc87 [3.422438942s]
Feb 20 13:08:00.391: INFO: Created: latency-svc-8zk7w
Feb 20 13:08:00.391: INFO: Got endpoints: latency-svc-8zk7w [3.11898115s]
Feb 20 13:08:00.499: INFO: Created: latency-svc-c99sz
Feb 20 13:08:00.548: INFO: Got endpoints: latency-svc-c99sz [3.016551178s]
Feb 20 13:08:00.681: INFO: Created: latency-svc-msrhx
Feb 20 13:08:00.696: INFO: Got endpoints: latency-svc-msrhx [2.991502953s]
Feb 20 13:08:00.741: INFO: Created: latency-svc-6qgqm
Feb 20 13:08:00.756: INFO: Got endpoints: latency-svc-6qgqm [2.980118582s]
Feb 20 13:08:00.906: INFO: Created: latency-svc-7lgtv
Feb 20 13:08:00.923: INFO: Got endpoints: latency-svc-7lgtv [3.007732778s]
Feb 20 13:08:01.227: INFO: Created: latency-svc-gvkc8
Feb 20 13:08:01.256: INFO: Got endpoints: latency-svc-gvkc8 [3.105395064s]
Feb 20 13:08:01.362: INFO: Created: latency-svc-zw48k
Feb 20 13:08:01.417: INFO: Got endpoints: latency-svc-zw48k [2.971703789s]
Feb 20 13:08:01.549: INFO: Created: latency-svc-9lscp
Feb 20 13:08:01.569: INFO: Got endpoints: latency-svc-9lscp [3.011159331s]
Feb 20 13:08:01.717: INFO: Created: latency-svc-xj6tm
Feb 20 13:08:01.752: INFO: Got endpoints: latency-svc-xj6tm [3.126414677s]
Feb 20 13:08:01.956: INFO: Created: latency-svc-d5p5p
Feb 20 13:08:02.017: INFO: Got endpoints: latency-svc-d5p5p [3.208146966s]
Feb 20 13:08:02.164: INFO: Created: latency-svc-hc5rq
Feb 20 13:08:02.219: INFO: Got endpoints: latency-svc-hc5rq [3.170872619s]
Feb 20 13:08:02.256: INFO: Created: latency-svc-hb4qt
Feb 20 13:08:02.433: INFO: Got endpoints: latency-svc-hb4qt [3.266018598s]
Feb 20 13:08:02.477: INFO: Created: latency-svc-jmfnk
Feb 20 13:08:02.494: INFO: Got endpoints: latency-svc-jmfnk [2.847764585s]
Feb 20 13:08:02.784: INFO: Created: latency-svc-q4tqm
Feb 20 13:08:02.784: INFO: Got endpoints: latency-svc-q4tqm [2.61593489s]
Feb 20 13:08:02.927: INFO: Created: latency-svc-6nhmr
Feb 20 13:08:02.934: INFO: Got endpoints: latency-svc-6nhmr [2.605509727s]
Feb 20 13:08:02.987: INFO: Created: latency-svc-s9cpk
Feb 20 13:08:03.132: INFO: Got endpoints: latency-svc-s9cpk [2.740531119s]
Feb 20 13:08:03.394: INFO: Created: latency-svc-gmvcm
Feb 20 13:08:03.396: INFO: Got endpoints: latency-svc-gmvcm [2.847421716s]
Feb 20 13:08:03.594: INFO: Created: latency-svc-tvdjj
Feb 20 13:08:03.627: INFO: Got endpoints: latency-svc-tvdjj [2.930246834s]
Feb 20 13:08:03.851: INFO: Created: latency-svc-r2p6d
Feb 20 13:08:03.851: INFO: Got endpoints: latency-svc-r2p6d [3.095402525s]
Feb 20 13:08:04.071: INFO: Created: latency-svc-hmkwq
Feb 20 13:08:04.131: INFO: Got endpoints: latency-svc-hmkwq [3.207397204s]
Feb 20 13:08:04.312: INFO: Created: latency-svc-sbvqw
Feb 20 13:08:04.332: INFO: Got endpoints: latency-svc-sbvqw [3.075006007s]
Feb 20 13:08:04.593: INFO: Created: latency-svc-w4rkv
Feb 20 13:08:04.605: INFO: Got endpoints: latency-svc-w4rkv [3.18843645s]
Feb 20 13:08:04.794: INFO: Created: latency-svc-r26gz
Feb 20 13:08:04.805: INFO: Got endpoints: latency-svc-r26gz [3.236607843s]
Feb 20 13:08:04.864: INFO: Created: latency-svc-5brsl
Feb 20 13:08:04.874: INFO: Got endpoints: latency-svc-5brsl [3.122169512s]
Feb 20 13:08:05.088: INFO: Created: latency-svc-gkc8p
Feb 20 13:08:05.100: INFO: Got endpoints: latency-svc-gkc8p [3.08247377s]
Feb 20 13:08:05.286: INFO: Created: latency-svc-6j6gp
Feb 20 13:08:05.300: INFO: Got endpoints: latency-svc-6j6gp [3.080675287s]
Feb 20 13:08:05.347: INFO: Created: latency-svc-s8gm4
Feb 20 13:08:05.447: INFO: Got endpoints: latency-svc-s8gm4 [3.01417489s]
Feb 20 13:08:05.486: INFO: Created: latency-svc-5ff9t
Feb 20 13:08:05.513: INFO: Got endpoints: latency-svc-5ff9t [3.019365989s]
Feb 20 13:08:05.682: INFO: Created: latency-svc-7zvdv
Feb 20 13:08:05.700: INFO: Got endpoints: latency-svc-7zvdv [2.915999447s]
Feb 20 13:08:05.809: INFO: Created: latency-svc-kpfq5
Feb 20 13:08:05.895: INFO: Got endpoints: latency-svc-kpfq5 [2.961291703s]
Feb 20 13:08:05.941: INFO: Created: latency-svc-xqv2d
Feb 20 13:08:05.967: INFO: Got endpoints: latency-svc-xqv2d [2.834530209s]
Feb 20 13:08:06.128: INFO: Created: latency-svc-zhcx2
Feb 20 13:08:06.143: INFO: Got endpoints: latency-svc-zhcx2 [2.747411233s]
Feb 20 13:08:06.291: INFO: Created: latency-svc-ctm8j
Feb 20 13:08:06.316: INFO: Got endpoints: latency-svc-ctm8j [2.689208787s]
Feb 20 13:08:06.489: INFO: Created: latency-svc-l97fw
Feb 20 13:08:06.504: INFO: Got endpoints: latency-svc-l97fw [2.652396335s]
Feb 20 13:08:06.557: INFO: Created: latency-svc-vh4cv
Feb 20 13:08:06.678: INFO: Got endpoints: latency-svc-vh4cv [2.547188885s]
Feb 20 13:08:06.763: INFO: Created: latency-svc-sr5vg
Feb 20 13:08:06.893: INFO: Got endpoints: latency-svc-sr5vg [2.561368943s]
Feb 20 13:08:06.934: INFO: Created: latency-svc-xbstn
Feb 20 13:08:07.078: INFO: Got endpoints: latency-svc-xbstn [2.472860851s]
Feb 20 13:08:07.086: INFO: Created: latency-svc-scwdx
Feb 20 13:08:07.100: INFO: Got endpoints: latency-svc-scwdx [2.294264903s]
Feb 20 13:08:07.316: INFO: Created: latency-svc-g96bs
Feb 20 13:08:07.327: INFO: Got endpoints: latency-svc-g96bs [2.452416542s]
Feb 20 13:08:07.814: INFO: Created: latency-svc-frc2j
Feb 20 13:08:07.873: INFO: Got endpoints: latency-svc-frc2j [2.772668122s]
Feb 20 13:08:08.400: INFO: Created: latency-svc-f85xt
Feb 20 13:08:08.408: INFO: Got endpoints: latency-svc-f85xt [3.107810889s]
Feb 20 13:08:08.605: INFO: Created: latency-svc-kmtzg
Feb 20 13:08:08.767: INFO: Got endpoints: latency-svc-kmtzg [3.318411251s]
Feb 20 13:08:08.833: INFO: Created: latency-svc-l5nhj
Feb 20 13:08:08.945: INFO: Got endpoints: latency-svc-l5nhj [3.431762845s]
Feb 20 13:08:08.999: INFO: Created: latency-svc-td8qd
Feb 20 13:08:08.999: INFO: Got endpoints: latency-svc-td8qd [3.298491134s]
Feb 20 13:08:09.123: INFO: Created: latency-svc-7hd7z
Feb 20 13:08:09.152: INFO: Got endpoints: latency-svc-7hd7z [3.256873077s]
Feb 20 13:08:09.324: INFO: Created: latency-svc-v68k4
Feb 20 13:08:09.346: INFO: Got endpoints: latency-svc-v68k4 [3.379198938s]
Feb 20 13:08:09.412: INFO: Created: latency-svc-j9vnn
Feb 20 13:08:09.551: INFO: Got endpoints: latency-svc-j9vnn [3.407938824s]
Feb 20 13:08:09.599: INFO: Created: latency-svc-9vjh6
Feb 20 13:08:09.619: INFO: Got endpoints: latency-svc-9vjh6 [3.30269314s]
Feb 20 13:08:09.770: INFO: Created: latency-svc-sbm47
Feb 20 13:08:09.787: INFO: Got endpoints: latency-svc-sbm47 [3.283054162s]
Feb 20 13:08:09.829: INFO: Created: latency-svc-4x8f8
Feb 20 13:08:09.924: INFO: Got endpoints: latency-svc-4x8f8 [3.245481207s]
Feb 20 13:08:09.953: INFO: Created: latency-svc-gbc92
Feb 20 13:08:09.965: INFO: Got endpoints: latency-svc-gbc92 [3.071877584s]
Feb 20 13:08:10.160: INFO: Created: latency-svc-8t69m
Feb 20 13:08:10.179: INFO: Got endpoints: latency-svc-8t69m [3.100198497s]
Feb 20 13:08:10.218: INFO: Created: latency-svc-4rq7m
Feb 20 13:08:10.232: INFO: Got endpoints: latency-svc-4rq7m [3.132101034s]
Feb 20 13:08:10.346: INFO: Created: latency-svc-j8qsf
Feb 20 13:08:10.399: INFO: Got endpoints: latency-svc-j8qsf [3.071737882s]
Feb 20 13:08:10.454: INFO: Created: latency-svc-p64rp
Feb 20 13:08:10.590: INFO: Got endpoints: latency-svc-p64rp [2.717279655s]
Feb 20 13:08:10.620: INFO: Created: latency-svc-2sl9f
Feb 20 13:08:10.651: INFO: Got endpoints: latency-svc-2sl9f [2.242978184s]
Feb 20 13:08:10.835: INFO: Created: latency-svc-xdcn9
Feb 20 13:08:10.845: INFO: Got endpoints: latency-svc-xdcn9 [2.078043621s]
Feb 20 13:08:11.131: INFO: Created: latency-svc-dvpr2
Feb 20 13:08:11.208: INFO: Got endpoints: latency-svc-dvpr2 [2.262903557s]
Feb 20 13:08:11.367: INFO: Created: latency-svc-694g8
Feb 20 13:08:11.414: INFO: Got endpoints: latency-svc-694g8 [2.415026149s]
Feb 20 13:08:11.430: INFO: Created: latency-svc-jgjjg
Feb 20 13:08:11.563: INFO: Got endpoints: latency-svc-jgjjg [2.410438304s]
Feb 20 13:08:11.602: INFO: Created: latency-svc-ln6np
Feb 20 13:08:11.649: INFO: Created: latency-svc-8bgxp
Feb 20 13:08:11.652: INFO: Got endpoints: latency-svc-ln6np [2.305860548s]
Feb 20 13:08:11.803: INFO: Got endpoints: latency-svc-8bgxp [2.251938304s]
Feb 20 13:08:11.836: INFO: Created: latency-svc-tc24v
Feb 20 13:08:11.864: INFO: Got endpoints: latency-svc-tc24v [2.244528782s]
Feb 20 13:08:11.975: INFO: Created: latency-svc-vrzt7
Feb 20 13:08:11.998: INFO: Got endpoints: latency-svc-vrzt7 [2.210439862s]
Feb 20 13:08:12.053: INFO: Created: latency-svc-f2smt
Feb 20 13:08:12.153: INFO: Got endpoints: latency-svc-f2smt [2.228966832s]
Feb 20 13:08:12.195: INFO: Created: latency-svc-9j9lz
Feb 20 13:08:12.240: INFO: Got endpoints: latency-svc-9j9lz [2.273978198s]
Feb 20 13:08:12.409: INFO: Created: latency-svc-cxcdv
Feb 20 13:08:12.424: INFO: Got endpoints: latency-svc-cxcdv [2.24498602s]
Feb 20 13:08:12.507: INFO: Created: latency-svc-hcd9c
Feb 20 13:08:12.622: INFO: Got endpoints: latency-svc-hcd9c [2.390008929s]
Feb 20 13:08:12.685: INFO: Created: latency-svc-qnzvf
Feb 20 13:08:12.697: INFO: Got endpoints: latency-svc-qnzvf [2.297843452s]
Feb 20 13:08:12.862: INFO: Created: latency-svc-trxs5
Feb 20 13:08:12.890: INFO: Got endpoints: latency-svc-trxs5 [2.299191222s]
Feb 20 13:08:13.015: INFO: Created: latency-svc-tpxtt
Feb 20 13:08:13.033: INFO: Got endpoints: latency-svc-tpxtt [2.381659985s]
Feb 20 13:08:13.077: INFO: Created: latency-svc-8v8dq
Feb 20 13:08:13.089: INFO: Got endpoints: latency-svc-8v8dq [2.243450759s]
Feb 20 13:08:13.195: INFO: Created: latency-svc-tvfl9
Feb 20 13:08:13.214: INFO: Got endpoints: latency-svc-tvfl9 [2.006228377s]
Feb 20 13:08:13.279: INFO: Created: latency-svc-gbssr
Feb 20 13:08:13.450: INFO: Got endpoints: latency-svc-gbssr [2.035071828s]
Feb 20 13:08:13.472: INFO: Created: latency-svc-vcmbg
Feb 20 13:08:13.504: INFO: Got endpoints: latency-svc-vcmbg [1.940464317s]
Feb 20 13:08:13.546: INFO: Created: latency-svc-8p7cl
Feb 20 13:08:13.705: INFO: Got endpoints: latency-svc-8p7cl [2.052944251s]
Feb 20 13:08:13.723: INFO: Created: latency-svc-q8ff9
Feb 20 13:08:13.766: INFO: Got endpoints: latency-svc-q8ff9 [262.08385ms]
Feb 20 13:08:13.916: INFO: Created: latency-svc-xhxfw
Feb 20 13:08:13.943: INFO: Got endpoints: latency-svc-xhxfw [2.139775872s]
Feb 20 13:08:14.103: INFO: Created: latency-svc-bq4fp
Feb 20 13:08:14.114: INFO: Got endpoints: latency-svc-bq4fp [2.250217006s]
Feb 20 13:08:14.178: INFO: Created: latency-svc-cwqlh
Feb 20 13:08:14.306: INFO: Got endpoints: latency-svc-cwqlh [2.308533957s]
Feb 20 13:08:14.375: INFO: Created: latency-svc-gv2c5
Feb 20 13:08:14.403: INFO: Got endpoints: latency-svc-gv2c5 [2.249698514s]
Feb 20 13:08:14.572: INFO: Created: latency-svc-nhj2l
Feb 20 13:08:14.684: INFO: Got endpoints: latency-svc-nhj2l [2.44453038s]
Feb 20 13:08:14.695: INFO: Created: latency-svc-4pgnj
Feb 20 13:08:14.716: INFO: Got endpoints: latency-svc-4pgnj [2.292198865s]
Feb 20 13:08:14.749: INFO: Created: latency-svc-hwwct
Feb 20 13:08:14.757: INFO: Got endpoints: latency-svc-hwwct [2.13461341s]
Feb 20 13:08:14.941: INFO: Created: latency-svc-jd9cd
Feb 20 13:08:14.971: INFO: Got endpoints: latency-svc-jd9cd [2.274064349s]
Feb 20 13:08:15.163: INFO: Created: latency-svc-2svjq
Feb 20 13:08:15.180: INFO: Got endpoints: latency-svc-2svjq [2.289620566s]
Feb 20 13:08:15.231: INFO: Created: latency-svc-r7wl9
Feb 20 13:08:15.246: INFO: Got endpoints: latency-svc-r7wl9 [2.212560004s]
Feb 20 13:08:15.379: INFO: Created: latency-svc-49zd8
Feb 20 13:08:15.387: INFO: Got endpoints: latency-svc-49zd8 [2.298114856s]
Feb 20 13:08:15.425: INFO: Created: latency-svc-c4c9b
Feb 20 13:08:15.446: INFO: Got endpoints: latency-svc-c4c9b [2.230994098s]
Feb 20 13:08:15.602: INFO: Created: latency-svc-bt5kb
Feb 20 13:08:15.606: INFO: Got endpoints: latency-svc-bt5kb [2.155700373s]
Feb 20 13:08:15.668: INFO: Created: latency-svc-75wbb
Feb 20 13:08:15.778: INFO: Got endpoints: latency-svc-75wbb [2.072496021s]
Feb 20 13:08:15.797: INFO: Created: latency-svc-vr9fg
Feb 20 13:08:15.826: INFO: Got endpoints: latency-svc-vr9fg [2.059440589s]
Feb 20 13:08:15.969: INFO: Created: latency-svc-dsvxh
Feb 20 13:08:16.001: INFO: Got endpoints: latency-svc-dsvxh [2.057636948s]
Feb 20 13:08:16.060: INFO: Created: latency-svc-tnjgv
Feb 20 13:08:16.154: INFO: Got endpoints: latency-svc-tnjgv [2.039988178s]
Feb 20 13:08:16.180: INFO: Created: latency-svc-vlrxx
Feb 20 13:08:16.185: INFO: Got endpoints: latency-svc-vlrxx [1.878606107s]
Feb 20 13:08:16.249: INFO: Created: latency-svc-m2wjj
Feb 20 13:08:16.369: INFO: Got endpoints: latency-svc-m2wjj [1.96623284s]
Feb 20 13:08:16.681: INFO: Created: latency-svc-9g4qk
Feb 20 13:08:16.703: INFO: Created: latency-svc-thp9g
Feb 20 13:08:16.734: INFO: Got endpoints: latency-svc-thp9g [2.017726145s]
Feb 20 13:08:16.756: INFO: Got endpoints: latency-svc-9g4qk [2.071268343s]
Feb 20 13:08:16.905: INFO: Created: latency-svc-slkgr
Feb 20 13:08:16.929: INFO: Got endpoints: latency-svc-slkgr [2.172133624s]
Feb 20 13:08:17.099: INFO: Created: latency-svc-6pjp8
Feb 20 13:08:17.108: INFO: Got endpoints: latency-svc-6pjp8 [2.13684408s]
Feb 20 13:08:17.155: INFO: Created: latency-svc-mlbtj
Feb 20 13:08:17.172: INFO: Got endpoints: latency-svc-mlbtj [1.992084445s]
Feb 20 13:08:17.317: INFO: Created: latency-svc-w7wgg
Feb 20 13:08:17.335: INFO: Got endpoints: latency-svc-w7wgg [2.089773694s]
Feb 20 13:08:17.531: INFO: Created: latency-svc-28kvf
Feb 20 13:08:17.552: INFO: Got endpoints: latency-svc-28kvf [2.165100775s]
Feb 20 13:08:17.761: INFO: Created: latency-svc-p4zvn
Feb 20 13:08:18.045: INFO: Created: latency-svc-745sx
Feb 20 13:08:18.053: INFO: Got endpoints: latency-svc-p4zvn [2.607801908s]
Feb 20 13:08:18.063: INFO: Got endpoints: latency-svc-745sx [2.457281277s]
Feb 20 13:08:18.351: INFO: Created: latency-svc-48b6g
Feb 20 13:08:18.375: INFO: Got endpoints: latency-svc-48b6g [2.596774625s]
Feb 20 13:08:18.430: INFO: Created: latency-svc-qpjgg
Feb 20 13:08:18.582: INFO: Got endpoints: latency-svc-qpjgg [2.755666614s]
Feb 20 13:08:18.638: INFO: Created: latency-svc-pr4ld
Feb 20 13:08:18.796: INFO: Got endpoints: latency-svc-pr4ld [2.79498756s]
Feb 20 13:08:18.809: INFO: Created: latency-svc-t2xwt
Feb 20 13:08:18.811: INFO: Got endpoints: latency-svc-t2xwt [2.656705969s]
Feb 20 13:08:19.059: INFO: Created: latency-svc-jkb2t
Feb 20 13:08:19.120: INFO: Got endpoints: latency-svc-jkb2t [2.93426581s]
Feb 20 13:08:19.573: INFO: Created: latency-svc-m9vnj
Feb 20 13:08:19.853: INFO: Got endpoints: latency-svc-m9vnj [3.483692357s]
Feb 20 13:08:19.933: INFO: Created: latency-svc-lfppp
Feb 20 13:08:20.254: INFO: Got endpoints: latency-svc-lfppp [3.519855461s]
Feb 20 13:08:20.300: INFO: Created: latency-svc-pw7mk
Feb 20 13:08:20.478: INFO: Got endpoints: latency-svc-pw7mk [3.721765258s]
Feb 20 13:08:20.510: INFO: Created: latency-svc-5p59w
Feb 20 13:08:20.569: INFO: Got endpoints: latency-svc-5p59w [3.639656813s]
Feb 20 13:08:20.802: INFO: Created: latency-svc-2dh9f
Feb 20 13:08:20.813: INFO: Got endpoints: latency-svc-2dh9f [3.704872882s]
Feb 20 13:08:20.813: INFO: Latencies: [262.08385ms 264.900616ms 461.936989ms 500.583677ms 713.329061ms 989.783475ms 1.070152055s 1.265064185s 1.524695451s 1.582581723s 1.810820485s 1.878606107s 1.940464317s 1.96623284s 1.992084445s 2.006228377s 2.017726145s 2.035071828s 2.039988178s 2.052944251s 2.057636948s 2.059440589s 2.068852127s 2.071268343s 2.072496021s 2.078043621s 2.089773694s 2.13461341s 2.13684408s 2.139775872s 2.155700373s 2.165100775s 2.172133624s 2.210439862s 2.212560004s 2.228966832s 2.230994098s 2.242978184s 2.243450759s 2.244528782s 2.24498602s 2.249698514s 2.250217006s 2.251759334s 2.251938304s 2.262903557s 2.273978198s 2.274064349s 2.289620566s 2.292198865s 2.294264903s 2.297843452s 2.298114856s 2.299191222s 2.305860548s 2.308533957s 2.381659985s 2.390008929s 2.410438304s 2.415026149s 2.44453038s 2.452416542s 2.457281277s 2.472860851s 2.483780263s 2.503727604s 2.547188885s 2.560368918s 2.561368943s 2.569056545s 2.596774625s 2.605509727s 2.607801908s 2.608027748s 2.61593489s 2.652396335s 2.656705969s 2.687911314s 2.689208787s 2.711369831s 2.717279655s 2.736682005s 2.740531119s 2.747411233s 2.755666614s 2.75998239s 2.761052416s 2.772668122s 2.794432396s 2.79498756s 2.809967937s 2.834530209s 2.847421716s 2.847764585s 2.86462772s 2.915999447s 2.930246834s 2.93426581s 2.961291703s 2.971703789s 2.980118582s 2.991502953s 3.007732778s 3.011159331s 3.01417489s 3.016551178s 3.019365989s 3.071737882s 3.071877584s 3.075006007s 3.080675287s 3.08247377s 3.095402525s 3.100198497s 3.105395064s 3.107810889s 3.11898115s 3.122169512s 3.126414677s 3.132101034s 3.170872619s 3.18843645s 3.207397204s 3.208146966s 3.221160826s 3.236607843s 3.245481207s 3.256873077s 3.263508874s 3.266018598s 3.283054162s 3.298491134s 3.30269314s 3.318411251s 3.323161488s 3.367533384s 3.379198938s 3.381732792s 3.398017365s 3.398087114s 3.407938824s 3.420705441s 3.422438942s 3.431762845s 3.440470964s 3.448888634s 3.463324922s 3.483692357s 3.512571117s 3.519855461s 3.524077617s 3.527367277s 3.553622757s 3.572137261s 3.609789429s 3.639656813s 3.641874671s 3.663857363s 3.700100266s 3.704872882s 3.714313594s 3.721765258s 3.725267746s 3.733330853s 3.751600026s 3.757332109s 3.801272181s 3.802678054s 3.814398135s 3.841083474s 3.870979831s 3.872028969s 3.891838497s 3.894194448s 3.955669563s 3.986703326s 4.006150672s 4.010753599s 4.035627636s 4.038233745s 4.101115323s 4.11317443s 4.172628271s 4.183613755s 4.196915211s 4.225660785s 4.227476921s 4.229133299s 4.243342227s 4.294989238s 4.314817528s 4.327462242s 4.46063443s 4.502861043s 4.5093816s 4.512376037s 4.515165865s 4.547264244s 4.60212355s 4.607279335s]
Feb 20 13:08:20.813: INFO: 50 %ile: 2.980118582s
Feb 20 13:08:20.813: INFO: 90 %ile: 4.101115323s
Feb 20 13:08:20.813: INFO: 99 %ile: 4.60212355s
Feb 20 13:08:20.813: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 13:08:20.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-9l48z" for this suite.
Feb 20 13:09:37.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:09:37.090: INFO: namespace: e2e-tests-svc-latency-9l48z, resource: bindings, ignored listing per whitelist
Feb 20 13:09:37.205: INFO: namespace e2e-tests-svc-latency-9l48z deletion completed in 1m16.215229727s

• [SLOW TEST:129.096 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
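
The 50/90/99 %ile lines above are direct index lookups into the sorted list of the 200 latency samples. A minimal stand-alone sketch of that calculation (pure Go, not the e2e framework's own code; the exact index rounding is an assumption):

package main

import (
    "fmt"
    "sort"
    "time"
)

// percentile picks the p-th percentile from an ascending-sorted slice by
// simple index math, clamping so small sample counts still return a value.
func percentile(sorted []time.Duration, p int) time.Duration {
    idx := (len(sorted)*p)/100 - 1
    if idx < 0 {
        idx = 0
    }
    return sorted[idx]
}

func main() {
    // A handful of the observed samples, just to exercise the calculation.
    samples := []time.Duration{
        262 * time.Millisecond,
        2980 * time.Millisecond,
        4101 * time.Millisecond,
        4602 * time.Millisecond,
        4607 * time.Millisecond,
    }
    sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
    for _, p := range []int{50, 90, 99} {
        fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
    }
}
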
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 13:09:37.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-3f03fc03-53e2-11ea-bcb7-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 20 13:09:37.330: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-m4vdj" to be "success or failure"
Feb 20 13:09:37.418: INFO: Pod "pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 87.646379ms
Feb 20 13:09:39.474: INFO: Pod "pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143885088s
Feb 20 13:09:41.525: INFO: Pod "pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194374003s
Feb 20 13:09:43.857: INFO: Pod "pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526971686s
Feb 20 13:09:45.873: INFO: Pod "pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542251678s
Feb 20 13:09:47.885: INFO: Pod "pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.554333439s
STEP: Saw pod success
Feb 20 13:09:47.885: INFO: Pod "pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008" satisfied condition "success or failure"
Feb 20 13:09:47.891: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 20 13:09:49.156: INFO: Waiting for pod pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008 to disappear
Feb 20 13:09:49.172: INFO: Pod pod-projected-secrets-3f04974c-53e2-11ea-bcb7-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 13:09:49.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m4vdj" for this suite.
Feb 20 13:09:57.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:09:57.329: INFO: namespace: e2e-tests-projected-m4vdj, resource: bindings, ignored listing per whitelist
Feb 20 13:09:57.572: INFO: namespace e2e-tests-projected-m4vdj deletion completed in 8.39592542s

• [SLOW TEST:20.367 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
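
For context, the defaultMode verified above is the file mode applied to the keys projected into the volume. A sketch of the kind of pod spec involved, built with the k8s.io/api Go types (names, image, and mode 0400 are illustrative assumptions, not the test's exact values; the k8s.io/api and k8s.io/apimachinery modules are assumed):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // the defaultMode under test
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls -l /etc/projected"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret",
                    MountPath: "/etc/projected",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-secret",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode,
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-secret-test",
                                },
                            },
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
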
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 13:09:57.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-gbhxb/configmap-test-4b4b8810-53e2-11ea-bcb7-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 20 13:09:57.952: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008" in namespace "e2e-tests-configmap-gbhxb" to be "success or failure"
Feb 20 13:09:57.972: INFO: Pod "pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.443964ms
Feb 20 13:10:01.007: INFO: Pod "pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054204917s
Feb 20 13:10:03.063: INFO: Pod "pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.110479828s
Feb 20 13:10:05.080: INFO: Pod "pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.127170768s
Feb 20 13:10:07.601: INFO: Pod "pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.64851847s
Feb 20 13:10:09.616: INFO: Pod "pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.66367579s
Feb 20 13:10:11.899: INFO: Pod "pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.947019591s
Feb 20 13:10:13.926: INFO: Pod "pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.973593248s
STEP: Saw pod success
Feb 20 13:10:13.926: INFO: Pod "pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008" satisfied condition "success or failure"
Feb 20 13:10:13.931: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008 container env-test: 
STEP: delete the pod
Feb 20 13:10:14.763: INFO: Waiting for pod pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008 to disappear
Feb 20 13:10:15.204: INFO: Pod pod-configmaps-4b4d69b9-53e2-11ea-bcb7-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 13:10:15.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gbhxb" for this suite.
Feb 20 13:10:21.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:10:21.510: INFO: namespace: e2e-tests-configmap-gbhxb, resource: bindings, ignored listing per whitelist
Feb 20 13:10:21.690: INFO: namespace e2e-tests-configmap-gbhxb deletion completed in 6.473154886s

• [SLOW TEST:24.118 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
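
The ConfigMap test above wires a ConfigMap key into a container environment variable and then checks the container's env output. A minimal sketch of that wiring with the k8s.io/api Go types (names and image are placeholders, not the test's exact values):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
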
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 13:10:21.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-59af421c-53e2-11ea-bcb7-0242ac110008
STEP: Creating secret with name s-test-opt-upd-59af42d2-53e2-11ea-bcb7-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-59af421c-53e2-11ea-bcb7-0242ac110008
STEP: Updating secret s-test-opt-upd-59af42d2-53e2-11ea-bcb7-0242ac110008
STEP: Creating secret with name s-test-opt-create-59af4302-53e2-11ea-bcb7-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 13:11:48.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mftcl" for this suite.
Feb 20 13:12:13.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:12:13.212: INFO: namespace: e2e-tests-projected-mftcl, resource: bindings, ignored listing per whitelist
Feb 20 13:12:13.214: INFO: namespace e2e-tests-projected-mftcl deletion completed in 24.218452766s

• [SLOW TEST:111.524 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
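
The "optional updates" test above hinges on marking one projected secret source optional, so the pod can start before that secret exists and the kubelet later reflects the create/update/delete steps in the mounted volume. A sketch of just that volume shape (secret names are placeholders mirroring the STEP names, not the generated ones):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    optional := true
    vol := corev1.Volume{
        Name: "projected-secrets",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{
                    {Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
                    }},
                    {Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
                        Optional:             &optional, // pod tolerates this secret being absent at startup
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}
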
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 13:12:13.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 20 13:12:13.422: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pvvfm,SelfLink:/api/v1/namespaces/e2e-tests-watch-pvvfm/configmaps/e2e-watch-test-label-changed,UID:9c0bbb6e-53e2-11ea-a994-fa163e34d433,ResourceVersion:22318253,Generation:0,CreationTimestamp:2020-02-20 13:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 20 13:12:13.422: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pvvfm,SelfLink:/api/v1/namespaces/e2e-tests-watch-pvvfm/configmaps/e2e-watch-test-label-changed,UID:9c0bbb6e-53e2-11ea-a994-fa163e34d433,ResourceVersion:22318254,Generation:0,CreationTimestamp:2020-02-20 13:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 20 13:12:13.422: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pvvfm,SelfLink:/api/v1/namespaces/e2e-tests-watch-pvvfm/configmaps/e2e-watch-test-label-changed,UID:9c0bbb6e-53e2-11ea-a994-fa163e34d433,ResourceVersion:22318255,Generation:0,CreationTimestamp:2020-02-20 13:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 20 13:12:23.506: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pvvfm,SelfLink:/api/v1/namespaces/e2e-tests-watch-pvvfm/configmaps/e2e-watch-test-label-changed,UID:9c0bbb6e-53e2-11ea-a994-fa163e34d433,ResourceVersion:22318269,Generation:0,CreationTimestamp:2020-02-20 13:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 20 13:12:23.506: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pvvfm,SelfLink:/api/v1/namespaces/e2e-tests-watch-pvvfm/configmaps/e2e-watch-test-label-changed,UID:9c0bbb6e-53e2-11ea-a994-fa163e34d433,ResourceVersion:22318270,Generation:0,CreationTimestamp:2020-02-20 13:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 20 13:12:23.506: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pvvfm,SelfLink:/api/v1/namespaces/e2e-tests-watch-pvvfm/configmaps/e2e-watch-test-label-changed,UID:9c0bbb6e-53e2-11ea-a994-fa163e34d433,ResourceVersion:22318271,Generation:0,CreationTimestamp:2020-02-20 13:12:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 13:12:23.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-pvvfm" for this suite.
Feb 20 13:12:31.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:12:31.657: INFO: namespace: e2e-tests-watch-pvvfm, resource: bindings, ignored listing per whitelist
Feb 20 13:12:31.701: INFO: namespace e2e-tests-watch-pvvfm deletion completed in 8.187027176s

• [SLOW TEST:18.487 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
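
The watch test above opens a label-selected watch on ConfigMaps and asserts on the ADDED/MODIFIED/DELETED events shown in the "Got :" lines. A rough client-go sketch of that pattern, written against the pre-1.18 Watch signature matching this 1.13-era suite (newer client-go versions also take a context; error handling elided for brevity):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    client, _ := kubernetes.NewForConfig(config)

    // Watch only ConfigMaps carrying the label the test flips back and forth.
    w, _ := client.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
        LabelSelector: "watch-this-configmap=label-changed-and-restored",
    })
    defer w.Stop()

    for event := range w.ResultChan() {
        switch event.Type {
        case watch.Added, watch.Modified, watch.Deleted:
            fmt.Printf("Got : %s %v\n", event.Type, event.Object)
        }
    }
}
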
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 13:12:31.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 20 13:12:31.984: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008" in namespace "e2e-tests-projected-vwq8g" to be "success or failure"
Feb 20 13:12:32.001: INFO: Pod "downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.132512ms
Feb 20 13:12:34.025: INFO: Pod "downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040372347s
Feb 20 13:12:36.038: INFO: Pod "downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053428401s
Feb 20 13:12:38.059: INFO: Pod "downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074417837s
Feb 20 13:12:40.073: INFO: Pod "downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08894911s
Feb 20 13:12:42.153: INFO: Pod "downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168835859s
STEP: Saw pod success
Feb 20 13:12:42.153: INFO: Pod "downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008" satisfied condition "success or failure"
Feb 20 13:12:42.161: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008 container client-container: 
STEP: delete the pod
Feb 20 13:12:42.623: INFO: Waiting for pod downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008 to disappear
Feb 20 13:12:42.723: INFO: Pod downwardapi-volume-a71eb837-53e2-11ea-bcb7-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 13:12:42.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vwq8g" for this suite.
Feb 20 13:12:48.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:12:48.938: INFO: namespace: e2e-tests-projected-vwq8g, resource: bindings, ignored listing per whitelist
Feb 20 13:12:48.979: INFO: namespace e2e-tests-projected-vwq8g deletion completed in 6.249296815s

• [SLOW TEST:17.277 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 13:12:48.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 20 13:12:49.150: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 20 13:12:49.168: INFO: Number of nodes with available pods: 0
Feb 20 13:12:49.168: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:12:50.391: INFO: Number of nodes with available pods: 0
Feb 20 13:12:50.391: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:12:51.699: INFO: Number of nodes with available pods: 0
Feb 20 13:12:51.699: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:12:52.250: INFO: Number of nodes with available pods: 0
Feb 20 13:12:52.250: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:12:53.191: INFO: Number of nodes with available pods: 0
Feb 20 13:12:53.191: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:12:55.666: INFO: Number of nodes with available pods: 0
Feb 20 13:12:55.667: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:12:56.204: INFO: Number of nodes with available pods: 0
Feb 20 13:12:56.204: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:12:57.211: INFO: Number of nodes with available pods: 0
Feb 20 13:12:57.211: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:12:58.199: INFO: Number of nodes with available pods: 0
Feb 20 13:12:58.199: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:12:59.192: INFO: Number of nodes with available pods: 1
Feb 20 13:12:59.192: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 20 13:12:59.312: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:00.343: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:01.345: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:02.345: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:03.741: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:04.342: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:05.345: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:06.349: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:06.349: INFO: Pod daemon-set-2g57s is not available
Feb 20 13:13:07.348: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:07.348: INFO: Pod daemon-set-2g57s is not available
Feb 20 13:13:08.345: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:08.345: INFO: Pod daemon-set-2g57s is not available
Feb 20 13:13:09.381: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:09.381: INFO: Pod daemon-set-2g57s is not available
Feb 20 13:13:10.338: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:10.338: INFO: Pod daemon-set-2g57s is not available
Feb 20 13:13:11.355: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:11.355: INFO: Pod daemon-set-2g57s is not available
Feb 20 13:13:12.336: INFO: Wrong image for pod: daemon-set-2g57s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 20 13:13:12.336: INFO: Pod daemon-set-2g57s is not available
Feb 20 13:13:13.343: INFO: Pod daemon-set-z9z99 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 20 13:13:13.362: INFO: Number of nodes with available pods: 0
Feb 20 13:13:13.362: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:13:14.394: INFO: Number of nodes with available pods: 0
Feb 20 13:13:14.394: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:13:15.379: INFO: Number of nodes with available pods: 0
Feb 20 13:13:15.379: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:13:16.392: INFO: Number of nodes with available pods: 0
Feb 20 13:13:16.392: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:13:17.389: INFO: Number of nodes with available pods: 0
Feb 20 13:13:17.390: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:13:18.385: INFO: Number of nodes with available pods: 0
Feb 20 13:13:18.385: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:13:19.400: INFO: Number of nodes with available pods: 0
Feb 20 13:13:19.400: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:13:20.385: INFO: Number of nodes with available pods: 0
Feb 20 13:13:20.385: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 20 13:13:21.380: INFO: Number of nodes with available pods: 1
Feb 20 13:13:21.380: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nzjwt, will wait for the garbage collector to delete the pods
Feb 20 13:13:21.498: INFO: Deleting DaemonSet.extensions daemon-set took: 22.326576ms
Feb 20 13:13:21.598: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.641703ms
Feb 20 13:13:32.631: INFO: Number of nodes with available pods: 0
Feb 20 13:13:32.631: INFO: Number of running nodes: 0, number of available pods: 0
Feb 20 13:13:32.640: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nzjwt/daemonsets","resourceVersion":"22318430"},"items":null}

Feb 20 13:13:32.657: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nzjwt/pods","resourceVersion":"22318430"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 13:13:32.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-nzjwt" for this suite.
Feb 20 13:13:39.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:13:39.089: INFO: namespace: e2e-tests-daemonsets-nzjwt, resource: bindings, ignored listing per whitelist
Feb 20 13:13:39.153: INFO: namespace e2e-tests-daemonsets-nzjwt deletion completed in 6.303636774s

• [SLOW TEST:50.173 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
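
The DaemonSet test above relies on the RollingUpdate strategy: once the image is patched, the controller deletes the old pod (daemon-set-2g57s) and waits for its replacement (daemon-set-z9z99) to become available, bounded by maxUnavailable. A sketch of a DaemonSet spec in that shape (labels and the initial image are illustrative; the test later patches the image to the redis test image):

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    maxUnavailable := intstr.FromInt(1)
    labels := map[string]string{"daemonset-name": "daemon-set"}
    ds := appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type:          appsv1.RollingUpdateDaemonSetStrategyType,
                RollingUpdate: &appsv1.RollingUpdateDaemonSet{MaxUnavailable: &maxUnavailable},
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "docker.io/library/nginx:1.14-alpine", // image the update later replaces
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(ds, "", "  ")
    fmt.Println(string(out))
}
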
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 20 13:13:39.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 20 13:13:39.321: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 20 13:13:39.335: INFO: Waiting for terminating namespaces to be deleted...
Feb 20 13:13:39.343: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 20 13:13:39.370: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 20 13:13:39.370: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 20 13:13:39.370: INFO: 	Container weave ready: true, restart count 0
Feb 20 13:13:39.370: INFO: 	Container weave-npc ready: true, restart count 0
Feb 20 13:13:39.370: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Feb 20 13:13:39.370: INFO: 	Container coredns ready: true, restart count 0
Feb 20 13:13:39.370: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 20 13:13:39.370: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 20 13:13:39.370: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 20 13:13:39.370: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Feb 20 13:13:39.370: INFO: 	Container coredns ready: true, restart count 0
Feb 20 13:13:39.370: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Feb 20 13:13:39.370: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb 20 13:13:39.498: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 20 13:13:39.498: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 20 13:13:39.498: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 20 13:13:39.498: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb 20 13:13:39.498: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb 20 13:13:39.498: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 20 13:13:39.498: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 20 13:13:39.498: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cf5e0d6b-53e2-11ea-bcb7-0242ac110008.15f51e8964239611], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-p5k7d/filler-pod-cf5e0d6b-53e2-11ea-bcb7-0242ac110008 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cf5e0d6b-53e2-11ea-bcb7-0242ac110008.15f51e8a58df6911], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cf5e0d6b-53e2-11ea-bcb7-0242ac110008.15f51e8aefca9550], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cf5e0d6b-53e2-11ea-bcb7-0242ac110008.15f51e8b1bce46aa], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f51e8bbb62e904], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 20 13:13:50.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-p5k7d" for this suite.
Feb 20 13:13:59.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 20 13:13:59.696: INFO: namespace: e2e-tests-sched-pred-p5k7d, resource: bindings, ignored listing per whitelist
Feb 20 13:13:59.696: INFO: namespace e2e-tests-sched-pred-p5k7d deletion completed in 8.899427287s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.543 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
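
The scheduling spec above works by arithmetic on CPU requests: the pods already on hunter-server-hu5at5svl7ps request 770m in total (100m for each coredns pod, 250m apiserver, 200m controller-manager, 100m scheduler, 20m weave, 0m for the rest), a filler pod then consumes most of the node's remaining allocatable CPU, and the final pod asks for more than what is left, so the scheduler rejects it with "0/1 nodes are available: 1 Insufficient cpu." A hedged client-go sketch of that bookkeeping follows; it is not the test's own helper, and the field selector and phase filtering are assumptions.

// Request-accounting sketch (assumptions: context-aware client-go, apiserver-side field selector).
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// remainingCPU returns the node's allocatable CPU minus the summed CPU
// requests of the non-terminated pods currently bound to nodeName.
func remainingCPU(cs kubernetes.Interface, nodeName string) (*resource.Quantity, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return nil, err
	}
	requested := resource.NewMilliQuantity(0, resource.DecimalSI)
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
			continue // terminated pods release their requests
		}
		for _, c := range p.Spec.Containers {
			if cpu, ok := c.Resources.Requests[corev1.ResourceCPU]; ok {
				requested.Add(cpu)
			}
		}
	}
	alloc := node.Status.Allocatable[corev1.ResourceCPU]
	remaining := alloc.DeepCopy()
	remaining.Sub(*requested)
	// A new pod requesting more CPU than `remaining` cannot fit on this node.
	return &remaining, nil
}
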
SSSSSSSSSSSSSSSSSSSS
Feb 20 13:13:59.697: INFO: Running AfterSuite actions on all nodes
Feb 20 13:13:59.697: INFO: Running AfterSuite actions on node 1
Feb 20 13:13:59.697: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8815.993 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS