I0123 12:56:13.138151 8 e2e.go:243] Starting e2e run "1cc389e5-13c5-45fa-8a68-84eacb028761" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579784171 - Will randomize all specs
Will run 215 of 4412 specs

Jan 23 12:56:13.420: INFO: >>> kubeConfig: /root/.kube/config
Jan 23 12:56:13.423: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 23 12:56:13.461: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 23 12:56:13.497: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 23 12:56:13.497: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 23 12:56:13.497: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 23 12:56:13.505: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 23 12:56:13.505: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 23 12:56:13.505: INFO: e2e test version: v1.15.7
Jan 23 12:56:13.506: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 12:56:13.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
Jan 23 12:56:13.731: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 12:56:13.788: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 23 12:56:18.801: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 23 12:56:24.817: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 23 12:56:26.825: INFO: Creating deployment "test-rollover-deployment"
Jan 23 12:56:26.844: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 23 12:56:28.874: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 23 12:56:28.885: INFO: Ensure that both replica sets have 1 created replica
Jan 23 12:56:28.892: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 23 12:56:28.901: INFO: Updating deployment test-rollover-deployment
Jan 23 12:56:28.901: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 23 12:56:31.006: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 23 12:56:31.572: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 23 12:56:31.582: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 12:56:31.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380989, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 12:56:33.600: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 12:56:33.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380989, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 12:56:37.196: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 12:56:37.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380989, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 12:56:37.610: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 12:56:37.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380989, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 12:56:39.601: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 12:56:39.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380989, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 12:56:41.600: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 12:56:41.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381000, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 12:56:43.607: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 12:56:43.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381000, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 12:56:45.596: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 12:56:45.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381000, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 12:56:47.599: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 12:56:47.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381000, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 12:56:49.598: INFO: all replica sets need to contain the pod-template-hash label
Jan 23 12:56:49.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715381000, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715380986, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 12:56:51.610: INFO: 
Jan 23 12:56:51.610: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 23 12:56:51.629: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3386,SelfLink:/apis/apps/v1/namespaces/deployment-3386/deployments/test-rollover-deployment,UID:d01a837b-cd6f-40a1-95d4-dabd5df9cde3,ResourceVersion:21554989,Generation:2,CreationTimestamp:2020-01-23 12:56:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-23 12:56:26 +0000 UTC 2020-01-23 12:56:26 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-23 12:56:50 +0000 UTC 2020-01-23 12:56:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Jan 23 12:56:51.645: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3386,SelfLink:/apis/apps/v1/namespaces/deployment-3386/replicasets/test-rollover-deployment-854595fc44,UID:816bbc71-ff71-458b-a3bb-5a75d53eede1,ResourceVersion:21554979,Generation:2,CreationTimestamp:2020-01-23 12:56:28 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d01a837b-cd6f-40a1-95d4-dabd5df9cde3 0xc002408b37 0xc002408b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 23 12:56:51.645: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 23 12:56:51.645: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3386,SelfLink:/apis/apps/v1/namespaces/deployment-3386/replicasets/test-rollover-controller,UID:5933e2cf-e931-4e77-a89e-b51bbb8c775e,ResourceVersion:21554988,Generation:2,CreationTimestamp:2020-01-23 12:56:13 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d01a837b-cd6f-40a1-95d4-dabd5df9cde3 0xc002408a67 0xc002408a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 23 12:56:51.645: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3386,SelfLink:/apis/apps/v1/namespaces/deployment-3386/replicasets/test-rollover-deployment-9b8b997cf,UID:4a0028af-2094-4e98-9436-073b94550679,ResourceVersion:21554939,Generation:2,CreationTimestamp:2020-01-23 12:56:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d01a837b-cd6f-40a1-95d4-dabd5df9cde3 0xc002408c00 0xc002408c01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 23 12:56:51.655: INFO: Pod "test-rollover-deployment-854595fc44-dnjfh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-dnjfh,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3386,SelfLink:/api/v1/namespaces/deployment-3386/pods/test-rollover-deployment-854595fc44-dnjfh,UID:b675dd72-688a-4bd6-bc84-bd6a1565ef76,ResourceVersion:21554962,Generation:0,CreationTimestamp:2020-01-23 12:56:29 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 816bbc71-ff71-458b-a3bb-5a75d53eede1 0xc0014047d7 0xc0014047d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-frr8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-frr8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-frr8c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001404840} {node.kubernetes.io/unreachable Exists NoExecute 0xc001404860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:56:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:56:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:56:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:56:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-23 12:56:29 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-23 12:56:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e517ceff9b85a30145adb08262eddb1ba28783bc0268f0148742142b1d34d9f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 12:56:51.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3386" for this suite.
Jan 23 12:56:59.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:56:59.847: INFO: namespace deployment-3386 deletion completed in 8.184180978s
• [SLOW TEST:46.340 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
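Note: the repeated "deployment status" polling above is the suite waiting for the rollover to converge: the observed generation catches up, and the single updated replica becomes ready and available with no stale replicas left. A standalone client-go sketch of that convergence check follows; the kubeconfig path, namespace, and deployment name are taken from the log, and the context-free Get signature assumes a client-go release contemporary with this v1.15 cluster.

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRollover polls the deployment until its status reports that the
// rollover is complete, mirroring the suite's final assertion that the old
// replica sets hold no replicas.
func waitForRollover(cs kubernetes.Interface, ns, name string) error {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		d, err := cs.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Converged: the controller has observed the latest spec, every
		// replica belongs to the new ReplicaSet, and all are available.
		if d.Status.ObservedGeneration >= d.Generation &&
			d.Status.UpdatedReplicas == *d.Spec.Replicas &&
			d.Status.Replicas == d.Status.UpdatedReplicas &&
			d.Status.AvailableReplicas == d.Status.UpdatedReplicas {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("deployment %s/%s did not finish rolling over", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForRollover(cs, "deployment-3386", "test-rollover-deployment"); err != nil {
		panic(err)
	}
	fmt.Println("rollover complete")
}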
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 12:56:59.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 12:57:08.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7172" for this suite.
Jan 23 12:57:14.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:57:14.601: INFO: namespace emptydir-wrapper-7172 deletion completed in 6.160146461s
• [SLOW TEST:14.754 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 12:57:14.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8229d9b6-26b4-4598-961c-02ee50731c08
STEP: Creating a pod to test consume configMaps
Jan 23 12:57:14.717: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665" in namespace "projected-4996" to be "success or failure"
Jan 23 12:57:14.724: INFO: Pod "pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665": Phase="Pending", Reason="", readiness=false. Elapsed: 7.121556ms
Jan 23 12:57:16.733: INFO: Pod "pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015644847s
Jan 23 12:57:18.745: INFO: Pod "pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027595541s
Jan 23 12:57:20.755: INFO: Pod "pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037278532s
Jan 23 12:57:22.775: INFO: Pod "pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057926524s
Jan 23 12:57:24.792: INFO: Pod "pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075078377s
STEP: Saw pod success
Jan 23 12:57:24.793: INFO: Pod "pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665" satisfied condition "success or failure"
Jan 23 12:57:24.798: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 23 12:57:24.933: INFO: Waiting for pod pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665 to disappear
Jan 23 12:57:24.939: INFO: Pod pod-projected-configmaps-41197d45-9c31-441d-8771-c1358fd13665 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 12:57:24.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4996" for this suite.
Jan 23 12:57:31.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:57:31.204: INFO: namespace projected-4996 deletion completed in 6.258096984s
• [SLOW TEST:16.602 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
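Note: the pod this spec creates consumes a single ConfigMap through two projected volumes mounted at different paths and checks that both files carry the same data. A minimal sketch of that pod shape follows; the volume names, mount paths, image, and command are illustrative, not the suite's exact values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedVolume wraps a ConfigMap reference in a projected volume source.
func projectedVolume(volName, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}

func main() {
	cm := "projected-configmap-test-volume" // illustrative; the suite suffixes a UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				// The same ConfigMap, projected twice under different names.
				projectedVolume("projected-configmap-volume", cm),
				projectedVolume("projected-configmap-volume-2", cm),
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm-1/* /etc/cm-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-configmap-volume", MountPath: "/etc/cm-1"},
					{Name: "projected-configmap-volume-2", MountPath: "/etc/cm-2"},
				},
			}},
		},
	}
	fmt.Printf("%s mounts the same ConfigMap at two paths\n", pod.Name)
}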
SSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 12:57:31.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-7209
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7209 to expose endpoints map[]
Jan 23 12:57:31.351: INFO: Get endpoints failed (8.806634ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 23 12:57:32.360: INFO: successfully validated that service multi-endpoint-test in namespace services-7209 exposes endpoints map[] (1.018671044s elapsed)
STEP: Creating pod pod1 in namespace services-7209
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7209 to expose endpoints map[pod1:[100]]
Jan 23 12:57:36.595: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.216744334s elapsed, will retry)
Jan 23 12:57:39.641: INFO: successfully validated that service multi-endpoint-test in namespace services-7209 exposes endpoints map[pod1:[100]] (7.263257955s elapsed)
STEP: Creating pod pod2 in namespace services-7209
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7209 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 23 12:57:46.121: INFO: Unexpected endpoints: found map[2e3a5ece-e20b-4be7-85f9-9e8cd8b56ae7:[100]], expected map[pod1:[100] pod2:[101]] (6.473082238s elapsed, will retry)
Jan 23 12:57:49.732: INFO: successfully validated that service multi-endpoint-test in namespace services-7209 exposes endpoints map[pod1:[100] pod2:[101]] (10.084465262s elapsed)
STEP: Deleting pod pod1 in namespace services-7209
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7209 to expose endpoints map[pod2:[101]]
Jan 23 12:57:49.849: INFO: successfully validated that service multi-endpoint-test in namespace services-7209 exposes endpoints map[pod2:[101]] (98.963988ms elapsed)
STEP: Deleting pod pod2 in namespace services-7209
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7209 to expose endpoints map[]
Jan 23 12:57:50.016: INFO: successfully validated that service multi-endpoint-test in namespace services-7209 exposes endpoints map[] (93.966874ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 12:57:50.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7209" for this suite.
Jan 23 12:58:14.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:58:14.323: INFO: namespace services-7209 deletion completed in 24.188538852s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:43.119 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
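Note: the "successfully validated" lines above come from re-reading the Service's Endpoints object until the pod-name-to-port map matches the expectation (map[pod1:[100] pod2:[101]] and so on). A rough standalone equivalent of one such read follows, under the same pre-context client-go assumption as the earlier sketch; the helper name is made up.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// endpointPorts maps each ready backend pod to the ports it serves, the
// same shape as the expected maps printed in the log above.
func endpointPorts(cs kubernetes.Interface, ns, svc string) (map[string][]int32, error) {
	ep, err := cs.CoreV1().Endpoints(ns).Get(svc, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	out := map[string][]int32{}
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			if addr.TargetRef == nil {
				continue // address not backed by a pod
			}
			for _, p := range subset.Ports {
				out[addr.TargetRef.Name] = append(out[addr.TargetRef.Name], p.Port)
			}
		}
	}
	return out, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	got, err := endpointPorts(cs, "services-7209", "multi-endpoint-test")
	if err != nil {
		panic(err)
	}
	fmt.Println(got) // compare against the expected map, retrying until equal
}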
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 12:58:14.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-5766
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5766 to expose endpoints map[]
Jan 23 12:58:14.578: INFO: successfully validated that service endpoint-test2 in namespace services-5766 exposes endpoints map[] (10.969841ms elapsed)
STEP: Creating pod pod1 in namespace services-5766
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5766 to expose endpoints map[pod1:[80]]
Jan 23 12:58:18.773: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.091474033s elapsed, will retry)
Jan 23 12:58:22.854: INFO: successfully validated that service endpoint-test2 in namespace services-5766 exposes endpoints map[pod1:[80]] (8.173096932s elapsed)
STEP: Creating pod pod2 in namespace services-5766
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5766 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 23 12:58:27.139: INFO: Unexpected endpoints: found map[8cad2e97-58bd-44cb-a999-73a0fd6e73a9:[80]], expected map[pod1:[80] pod2:[80]] (4.234848045s elapsed, will retry)
Jan 23 12:58:31.491: INFO: successfully validated that service endpoint-test2 in namespace services-5766 exposes endpoints map[pod1:[80] pod2:[80]] (8.586389531s elapsed)
STEP: Deleting pod pod1 in namespace services-5766
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5766 to expose endpoints map[pod2:[80]]
Jan 23 12:58:32.580: INFO: successfully validated that service endpoint-test2 in namespace services-5766 exposes endpoints map[pod2:[80]] (1.079677818s elapsed)
STEP: Deleting pod pod2 in namespace services-5766
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5766 to expose endpoints map[]
Jan 23 12:58:32.633: INFO: successfully validated that service endpoint-test2 in namespace services-5766 exposes endpoints map[] (39.952215ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 12:58:32.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5766" for this suite.
Jan 23 12:58:56.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:58:56.162: INFO: namespace services-5766 deletion completed in 23.15136317s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:41.838 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 12:58:56.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 23 12:59:04.838: INFO: Successfully updated pod "labelsupdate18b494cc-0cc6-403d-827b-dd0ed8f3dfa7"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 12:59:08.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2106" for this suite.
Jan 23 12:59:31.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:59:31.107: INFO: namespace projected-2106 deletion completed in 22.144647792s
• [SLOW TEST:34.945 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
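Note: this spec mounts the pod's own metadata.labels as a file through a projected downward API volume, patches the labels (the "Successfully updated pod" line), and waits for the kubelet to rewrite the mounted file. A sketch of the volume shape and the patch follows, under the same v1.15-era client-go assumption; the names and patch payload are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelsVolume exposes metadata.labels at <mountPath>/labels via the
// projected downward API source; the kubelet refreshes the file after the
// pod's labels change.
func labelsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				}},
			},
		},
	}
}

// updateLabels merge-patches the pod's labels, which is what triggers the
// mounted file to change; the pre-context Patch signature is assumed here.
func updateLabels(cs kubernetes.Interface, ns, pod string) error {
	patch := []byte(`{"metadata":{"labels":{"updated":"true"}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(pod, types.MergePatchType, patch)
	return err
}

func main() {
	fmt.Printf("volume %q projects metadata.labels into the pod\n", labelsVolume().Name)
}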
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 12:59:31.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 23 12:59:31.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9963'
Jan 23 12:59:34.230: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 12:59:34.230: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 23 12:59:34.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9963'
Jan 23 12:59:34.453: INFO: stderr: ""
Jan 23 12:59:34.454: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 12:59:34.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9963" for this suite.
Jan 23 12:59:40.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 12:59:40.704: INFO: namespace kubectl-9963 deletion completed in 6.232539285s
• [SLOW TEST:9.597 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
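Note: the stderr deprecation warning above points away from generators; what the job/v1 generator produced is simply a batch/v1 Job whose pod template uses restartPolicy OnFailure. A sketch of creating the equivalent Job directly follows, under the same v1.15-era client-go assumption; namespace, names, and image are taken from the log.

package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure is what --restart=OnFailure selected via the
					// deprecated job/v1 generator.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	created, err := cs.BatchV1().Jobs("kubectl-9963").Create(job)
	if err != nil {
		panic(err)
	}
	fmt.Printf("job.batch/%s created\n", created.Name)
}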
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 12:59:40.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 12:59:40.884: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 23 12:59:45.892: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 23 12:59:49.904: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 23 12:59:57.976: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8186,SelfLink:/apis/apps/v1/namespaces/deployment-8186/deployments/test-cleanup-deployment,UID:fd65d172-910f-418b-908f-b81325747ea9,ResourceVersion:21555523,Generation:1,CreationTimestamp:2020-01-23 12:59:49 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-23 12:59:50 +0000 UTC 2020-01-23 12:59:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-23 12:59:56 +0000 UTC 2020-01-23 12:59:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Jan 23 12:59:57.980: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8186,SelfLink:/apis/apps/v1/namespaces/deployment-8186/replicasets/test-cleanup-deployment-55bbcbc84c,UID:740f6c72-4b5d-4956-a67e-1069c97ce37b,ResourceVersion:21555513,Generation:1,CreationTimestamp:2020-01-23 12:59:49 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment fd65d172-910f-418b-908f-b81325747ea9 0xc0020eceb7 0xc0020eceb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 23 12:59:57.987: INFO: Pod "test-cleanup-deployment-55bbcbc84c-hk4js" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-hk4js,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8186,SelfLink:/api/v1/namespaces/deployment-8186/pods/test-cleanup-deployment-55bbcbc84c-hk4js,UID:ccd32a43-84ce-40c5-aa10-778fbbda8bd7,ResourceVersion:21555512,Generation:0,CreationTimestamp:2020-01-23 12:59:49 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 740f6c72-4b5d-4956-a67e-1069c97ce37b 0xc0020ed4c7 0xc0020ed4c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fdsm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fdsm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-5fdsm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020ed540} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020ed560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:59:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:59:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:59:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 12:59:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-23 12:59:50 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-23 12:59:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://13542ad304b86e494f1fea249703548b60e2453c56892a15bbcc93a1c61b0f05}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 12:59:57.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8186" for this suite.
Jan 23 13:00:04.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:00:04.145: INFO: namespace deployment-8186 deletion completed in 6.13680298s
• [SLOW TEST:23.440 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:00:09.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-693" for this suite. Jan 23 13:00:15.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:00:15.972: INFO: namespace gc-693 deletion completed in 6.179114938s • [SLOW TEST:11.827 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:00:15.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 23 13:00:16.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476" in namespace "downward-api-1737" to be "success or failure" Jan 23 13:00:16.263: INFO: Pod "downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476": Phase="Pending", Reason="", readiness=false. Elapsed: 92.126245ms Jan 23 13:00:18.273: INFO: Pod "downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101933878s Jan 23 13:00:20.283: INFO: Pod "downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111839103s Jan 23 13:00:22.301: INFO: Pod "downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130306455s Jan 23 13:00:24.333: INFO: Pod "downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162614284s Jan 23 13:00:26.355: INFO: Pod "downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.184096666s STEP: Saw pod success Jan 23 13:00:26.355: INFO: Pod "downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476" satisfied condition "success or failure" Jan 23 13:00:26.364: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476 container client-container: STEP: delete the pod Jan 23 13:00:26.692: INFO: Waiting for pod downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476 to disappear Jan 23 13:00:26.703: INFO: Pod downwardapi-volume-67fc4187-9608-44ff-8dae-7d54ffc23476 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:00:26.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1737" for this suite. Jan 23 13:00:32.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:00:33.006: INFO: namespace downward-api-1737 deletion completed in 6.162725529s • [SLOW TEST:17.034 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:00:33.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 23 13:00:33.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9101' Jan 23 13:00:33.261: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 23 13:00:33.261: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jan 23 13:00:35.319: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-mttc9] Jan 23 13:00:35.319: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-mttc9" in namespace "kubectl-9101" to be "running and ready" Jan 23 13:00:35.322: INFO: Pod "e2e-test-nginx-rc-mttc9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.477843ms Jan 23 13:00:37.329: INFO: Pod "e2e-test-nginx-rc-mttc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009509002s Jan 23 13:00:39.336: INFO: Pod "e2e-test-nginx-rc-mttc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017219983s Jan 23 13:00:41.346: INFO: Pod "e2e-test-nginx-rc-mttc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027058638s Jan 23 13:00:43.352: INFO: Pod "e2e-test-nginx-rc-mttc9": Phase="Running", Reason="", readiness=true. Elapsed: 8.032694729s Jan 23 13:00:43.352: INFO: Pod "e2e-test-nginx-rc-mttc9" satisfied condition "running and ready" Jan 23 13:00:43.352: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-mttc9] Jan 23 13:00:43.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9101' Jan 23 13:00:43.502: INFO: stderr: "" Jan 23 13:00:43.502: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jan 23 13:00:43.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9101' Jan 23 13:00:43.613: INFO: stderr: "" Jan 23 13:00:43.613: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:00:43.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9101" for this suite. Jan 23 13:01:05.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:01:05.745: INFO: namespace kubectl-9101 deletion completed in 22.122007133s • [SLOW TEST:32.738 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:01:05.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 23 13:01:05.874: INFO: Waiting up to 5m0s for pod "downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335" in namespace "downward-api-640" to be "success or failure" Jan 23 13:01:05.908: INFO: Pod "downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335": Phase="Pending", Reason="", readiness=false. Elapsed: 33.46497ms Jan 23 13:01:07.919: INFO: Pod "downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045257051s Jan 23 13:01:09.931: INFO: Pod "downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056551292s Jan 23 13:01:11.940: INFO: Pod "downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065652024s Jan 23 13:01:13.946: INFO: Pod "downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071682493s Jan 23 13:01:15.958: INFO: Pod "downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335": Phase="Pending", Reason="", readiness=false. Elapsed: 10.08332107s Jan 23 13:01:17.968: INFO: Pod "downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.093912193s STEP: Saw pod success Jan 23 13:01:17.968: INFO: Pod "downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335" satisfied condition "success or failure" Jan 23 13:01:17.973: INFO: Trying to get logs from node iruya-node pod downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335 container dapi-container: STEP: delete the pod Jan 23 13:01:18.067: INFO: Waiting for pod downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335 to disappear Jan 23 13:01:18.077: INFO: Pod downward-api-de747d8a-6ea2-44dd-b580-c195e2ee6335 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:01:18.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-640" for this suite. Jan 23 13:01:24.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:01:24.329: INFO: namespace downward-api-640 deletion completed in 6.245163353s • [SLOW TEST:18.583 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:01:24.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-d9f91567-787c-4261-8d36-58136b8d9b74 STEP: Creating configMap with name cm-test-opt-upd-99af9a6c-6e81-4b89-8338-f8b27281be1c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d9f91567-787c-4261-8d36-58136b8d9b74 STEP: Updating configmap cm-test-opt-upd-99af9a6c-6e81-4b89-8338-f8b27281be1c STEP: Creating configMap with name cm-test-opt-create-37ba5f08-3f40-485b-b2ff-a1f1e93acb6a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:01:40.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9871" for this suite. Jan 23 13:02:02.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:02:03.021: INFO: namespace projected-9871 deletion completed in 22.1846725s • [SLOW TEST:38.690 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:02:03.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 23 13:02:03.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8193' Jan 23 13:02:03.522: INFO: stderr: "" Jan 23 13:02:03.523: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 23 13:02:03.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8193' Jan 23 13:02:03.621: INFO: stderr: "" Jan 23 13:02:03.621: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Jan 23 13:02:08.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8193' Jan 23 13:02:08.739: INFO: stderr: "" Jan 23 13:02:08.739: INFO: stdout: "update-demo-nautilus-k8jrw update-demo-nautilus-tpmg4 " Jan 23 13:02:08.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k8jrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8193' Jan 23 13:02:08.832: INFO: stderr: "" Jan 23 13:02:08.832: INFO: stdout: "" Jan 23 13:02:08.833: INFO: update-demo-nautilus-k8jrw is created but not running Jan 23 13:02:13.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8193' Jan 23 13:02:13.999: INFO: stderr: "" Jan 23 13:02:14.000: INFO: stdout: "update-demo-nautilus-k8jrw update-demo-nautilus-tpmg4 " Jan 23 13:02:14.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k8jrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8193' Jan 23 13:02:14.104: INFO: stderr: "" Jan 23 13:02:14.105: INFO: stdout: "true" Jan 23 13:02:14.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k8jrw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8193' Jan 23 13:02:14.204: INFO: stderr: "" Jan 23 13:02:14.204: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 13:02:14.204: INFO: validating pod update-demo-nautilus-k8jrw Jan 23 13:02:14.282: INFO: got data: { "image": "nautilus.jpg" } Jan 23 13:02:14.282: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 13:02:14.282: INFO: update-demo-nautilus-k8jrw is verified up and running Jan 23 13:02:14.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpmg4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8193' Jan 23 13:02:14.372: INFO: stderr: "" Jan 23 13:02:14.372: INFO: stdout: "true" Jan 23 13:02:14.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpmg4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8193' Jan 23 13:02:14.451: INFO: stderr: "" Jan 23 13:02:14.451: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 23 13:02:14.451: INFO: validating pod update-demo-nautilus-tpmg4 Jan 23 13:02:14.463: INFO: got data: { "image": "nautilus.jpg" } Jan 23 13:02:14.463: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 23 13:02:14.463: INFO: update-demo-nautilus-tpmg4 is verified up and running STEP: using delete to clean up resources Jan 23 13:02:14.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8193' Jan 23 13:02:14.566: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 23 13:02:14.566: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 23 13:02:14.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8193' Jan 23 13:02:14.674: INFO: stderr: "No resources found.\n" Jan 23 13:02:14.674: INFO: stdout: "" Jan 23 13:02:14.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8193 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 23 13:02:14.750: INFO: stderr: "" Jan 23 13:02:14.751: INFO: stdout: "update-demo-nautilus-k8jrw\nupdate-demo-nautilus-tpmg4\n" Jan 23 13:02:15.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8193' Jan 23 13:02:15.382: INFO: stderr: "No resources found.\n" Jan 23 13:02:15.382: INFO: stdout: "" Jan 23 13:02:15.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8193 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 23 13:02:15.462: INFO: stderr: "" Jan 23 13:02:15.462: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:02:15.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8193" for this suite. Jan 23 13:02:37.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:02:37.599: INFO: namespace kubectl-8193 deletion completed in 22.128722426s • [SLOW TEST:34.578 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:02:37.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-1832000c-f228-4f47-a04e-8a1b33233625 STEP: Creating a pod to test consume configMaps Jan 23 13:02:37.712: INFO: Waiting up to 5m0s for pod "pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf" in namespace "configmap-2139" to be "success or 
failure" Jan 23 13:02:37.733: INFO: Pod "pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf": Phase="Pending", Reason="", readiness=false. Elapsed: 21.23883ms Jan 23 13:02:39.744: INFO: Pod "pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031768973s Jan 23 13:02:41.757: INFO: Pod "pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045277625s Jan 23 13:02:43.767: INFO: Pod "pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054986991s Jan 23 13:02:45.776: INFO: Pod "pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063672979s Jan 23 13:02:47.796: INFO: Pod "pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084050707s STEP: Saw pod success Jan 23 13:02:47.796: INFO: Pod "pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf" satisfied condition "success or failure" Jan 23 13:02:47.801: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf container configmap-volume-test: STEP: delete the pod Jan 23 13:02:47.880: INFO: Waiting for pod pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf to disappear Jan 23 13:02:47.885: INFO: Pod pod-configmaps-7894a2b0-1ef8-4734-a5e3-d98e249443bf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:02:47.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2139" for this suite. Jan 23 13:02:53.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:02:54.033: INFO: namespace configmap-2139 deletion completed in 6.139640434s • [SLOW TEST:16.433 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:02:54.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-595c STEP: Creating a pod to test atomic-volume-subpath Jan 23 13:02:54.183: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-595c" in namespace "subpath-331" to be "success or failure" Jan 23 13:02:54.198: INFO: Pod 
"pod-subpath-test-configmap-595c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.681724ms Jan 23 13:02:56.207: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024338487s Jan 23 13:02:58.214: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030808759s Jan 23 13:03:00.231: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048323532s Jan 23 13:03:02.249: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0660726s Jan 23 13:03:04.262: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Running", Reason="", readiness=true. Elapsed: 10.07959472s Jan 23 13:03:06.274: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Running", Reason="", readiness=true. Elapsed: 12.091510175s Jan 23 13:03:08.288: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Running", Reason="", readiness=true. Elapsed: 14.104664703s Jan 23 13:03:10.296: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Running", Reason="", readiness=true. Elapsed: 16.113511962s Jan 23 13:03:12.312: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Running", Reason="", readiness=true. Elapsed: 18.12895397s Jan 23 13:03:14.321: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Running", Reason="", readiness=true. Elapsed: 20.138312031s Jan 23 13:03:16.328: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Running", Reason="", readiness=true. Elapsed: 22.145377732s Jan 23 13:03:18.338: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Running", Reason="", readiness=true. Elapsed: 24.155572017s Jan 23 13:03:20.437: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Running", Reason="", readiness=true. Elapsed: 26.254551138s Jan 23 13:03:22.448: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Running", Reason="", readiness=true. Elapsed: 28.265511464s Jan 23 13:03:24.458: INFO: Pod "pod-subpath-test-configmap-595c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.275047226s STEP: Saw pod success Jan 23 13:03:24.458: INFO: Pod "pod-subpath-test-configmap-595c" satisfied condition "success or failure" Jan 23 13:03:24.463: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-595c container test-container-subpath-configmap-595c: STEP: delete the pod Jan 23 13:03:24.519: INFO: Waiting for pod pod-subpath-test-configmap-595c to disappear Jan 23 13:03:24.523: INFO: Pod pod-subpath-test-configmap-595c no longer exists STEP: Deleting pod pod-subpath-test-configmap-595c Jan 23 13:03:24.524: INFO: Deleting pod "pod-subpath-test-configmap-595c" in namespace "subpath-331" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:03:24.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-331" for this suite. 
Jan 23 13:03:30.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:03:30.738: INFO: namespace subpath-331 deletion completed in 6.207073687s • [SLOW TEST:36.705 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:03:30.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jan 23 13:03:40.968: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 23 13:04:01.114: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:04:01.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1768" for this suite. 
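(Editor's annotation.) The Delete Grace Period flow above submits a pod, deletes it with a grace period, and treats the pod's disappearance as confirmation that the kubelet observed the termination notice. A minimal sketch of the knob involved, with a hypothetical pod name and image rather than the one the test generated:

  apiVersion: v1
  kind: Pod
  metadata:
    name: grace-demo                    # hypothetical name
  spec:
    terminationGracePeriodSeconds: 30   # window between SIGTERM and SIGKILL
    containers:
    - name: app
      image: nginx:1.14-alpine

Deleting it with kubectl delete pod grace-demo --grace-period=30 stamps a deletionTimestamp on the object; the kubelet sends SIGTERM, waits up to the window before force-killing, and the API object disappears once termination is confirmed, which is what the "no pod exists with the name we were looking for" message above reflects.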
Jan 23 13:04:07.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:04:07.282: INFO: namespace pods-1768 deletion completed in 6.151286655s • [SLOW TEST:36.544 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:04:07.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 23 13:04:18.101: INFO: Successfully updated pod "annotationupdate530e62a4-b7a2-40c5-bb5f-4229c742aeab" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:04:20.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8102" for this suite. 
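(Editor's annotation.) The annotation-update spec above projects pod metadata into a downward API volume and waits for the file contents to change after the pod is patched. A minimal sketch of that projection, assuming hypothetical names and a busybox image standing in for the test's own image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: annotation-demo            # hypothetical name
    annotations:
      build: "one"
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations

After kubectl annotate pod annotation-demo build=two --overwrite, the kubelet rewrites the projected file on a subsequent sync, so the new value appears in the volume with a short delay rather than instantly — consistent with the gap between "Successfully updated pod" and teardown in the log above.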
Jan 23 13:04:42.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:04:42.376: INFO: namespace downward-api-8102 deletion completed in 22.178116825s • [SLOW TEST:35.093 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:04:42.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jan 23 13:04:42.546: INFO: Waiting up to 5m0s for pod "client-containers-19ce902d-30c9-47a8-9288-7a2b025ba492" in namespace "containers-5989" to be "success or failure" Jan 23 13:04:42.555: INFO: Pod "client-containers-19ce902d-30c9-47a8-9288-7a2b025ba492": Phase="Pending", Reason="", readiness=false. Elapsed: 9.144512ms Jan 23 13:04:44.569: INFO: Pod "client-containers-19ce902d-30c9-47a8-9288-7a2b025ba492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022769196s Jan 23 13:04:46.587: INFO: Pod "client-containers-19ce902d-30c9-47a8-9288-7a2b025ba492": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0405257s Jan 23 13:04:48.613: INFO: Pod "client-containers-19ce902d-30c9-47a8-9288-7a2b025ba492": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066533027s Jan 23 13:04:50.635: INFO: Pod "client-containers-19ce902d-30c9-47a8-9288-7a2b025ba492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088454993s STEP: Saw pod success Jan 23 13:04:50.635: INFO: Pod "client-containers-19ce902d-30c9-47a8-9288-7a2b025ba492" satisfied condition "success or failure" Jan 23 13:04:50.655: INFO: Trying to get logs from node iruya-node pod client-containers-19ce902d-30c9-47a8-9288-7a2b025ba492 container test-container: STEP: delete the pod Jan 23 13:04:50.791: INFO: Waiting for pod client-containers-19ce902d-30c9-47a8-9288-7a2b025ba492 to disappear Jan 23 13:04:50.796: INFO: Pod client-containers-19ce902d-30c9-47a8-9288-7a2b025ba492 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:04:50.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5989" for this suite. 
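(Editor's annotation.) The Docker Containers test above verifies that command and args in the pod spec replace the image's ENTRYPOINT and CMD. A minimal sketch of the "override all" case, with hypothetical names (the conformance test uses its own test image, not busybox):

  apiVersion: v1
  kind: Pod
  metadata:
    name: override-demo              # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["/bin/echo"]               # replaces the image ENTRYPOINT
      args: ["overridden", "arguments"]    # replaces the image CMD

Setting only args while leaving command unset keeps the image's ENTRYPOINT and swaps just its arguments; the test exercises the combination where both are overridden.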
Jan 23 13:04:56.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:04:56.987: INFO: namespace containers-5989 deletion completed in 6.18597319s • [SLOW TEST:14.612 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:04:56.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-bb62283f-ae7f-47c5-98cd-d6d8ad9172a4 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:04:57.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1577" for this suite. Jan 23 13:05:03.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:05:03.286: INFO: namespace secrets-1577 deletion completed in 6.201644159s • [SLOW TEST:6.297 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:05:03.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-730d50af-1d5b-46fa-992c-dbbcc7d4fe82 STEP: Creating secret with name s-test-opt-upd-2ff55ca7-3c0b-4c86-9255-b0b9ec433087 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-730d50af-1d5b-46fa-992c-dbbcc7d4fe82 STEP: Updating secret s-test-opt-upd-2ff55ca7-3c0b-4c86-9255-b0b9ec433087 STEP: Creating secret with name 
s-test-opt-create-12b45642-337b-4b62-ac6b-a30c90d9694e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:05:17.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4544" for this suite. Jan 23 13:05:39.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:05:39.831: INFO: namespace secrets-4544 deletion completed in 22.128960575s • [SLOW TEST:36.544 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:05:39.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 23 13:05:39.891: INFO: Creating ReplicaSet my-hostname-basic-990cc1ff-95c8-4ee6-aa7b-5326a1f657f9 Jan 23 13:05:39.909: INFO: Pod name my-hostname-basic-990cc1ff-95c8-4ee6-aa7b-5326a1f657f9: Found 0 pods out of 1 Jan 23 13:05:44.915: INFO: Pod name my-hostname-basic-990cc1ff-95c8-4ee6-aa7b-5326a1f657f9: Found 1 pods out of 1 Jan 23 13:05:44.915: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-990cc1ff-95c8-4ee6-aa7b-5326a1f657f9" is running Jan 23 13:05:50.939: INFO: Pod "my-hostname-basic-990cc1ff-95c8-4ee6-aa7b-5326a1f657f9-27472" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 13:05:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 13:05:40 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-990cc1ff-95c8-4ee6-aa7b-5326a1f657f9]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 13:05:40 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-990cc1ff-95c8-4ee6-aa7b-5326a1f657f9]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 13:05:39 +0000 UTC Reason: Message:}]) Jan 23 13:05:50.939: INFO: Trying to dial the pod Jan 23 13:05:55.997: INFO: Controller my-hostname-basic-990cc1ff-95c8-4ee6-aa7b-5326a1f657f9: Got expected result from replica 1 [my-hostname-basic-990cc1ff-95c8-4ee6-aa7b-5326a1f657f9-27472]: "my-hostname-basic-990cc1ff-95c8-4ee6-aa7b-5326a1f657f9-27472", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:05:55.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5306" for this suite. Jan 23 13:06:02.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:06:02.215: INFO: namespace replicaset-5306 deletion completed in 6.206325773s • [SLOW TEST:22.384 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:06:02.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2c22c029-6a20-4a7f-b4c0-88a6ea42034b STEP: Creating a pod to test consume secrets Jan 23 13:06:02.777: INFO: Waiting up to 5m0s for pod "pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a" in namespace "secrets-3426" to be "success or failure" Jan 23 13:06:02.787: INFO: Pod "pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.132125ms Jan 23 13:06:04.798: INFO: Pod "pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020010965s Jan 23 13:06:06.816: INFO: Pod "pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037676733s Jan 23 13:06:08.836: INFO: Pod "pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058069879s Jan 23 13:06:10.844: INFO: Pod "pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066461674s Jan 23 13:06:12.856: INFO: Pod "pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078182426s Jan 23 13:06:14.873: INFO: Pod "pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.09530789s STEP: Saw pod success Jan 23 13:06:14.873: INFO: Pod "pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a" satisfied condition "success or failure" Jan 23 13:06:14.879: INFO: Trying to get logs from node iruya-node pod pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a container secret-volume-test: STEP: delete the pod Jan 23 13:06:14.929: INFO: Waiting for pod pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a to disappear Jan 23 13:06:14.933: INFO: Pod pod-secrets-1c854064-1e08-4799-a5ec-d82ae24d716a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:06:14.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3426" for this suite. Jan 23 13:06:21.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:06:21.172: INFO: namespace secrets-3426 deletion completed in 6.233727444s STEP: Destroying namespace "secret-namespace-1918" for this suite. Jan 23 13:06:27.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:06:27.348: INFO: namespace secret-namespace-1918 deletion completed in 6.175602437s • [SLOW TEST:25.132 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:06:27.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-91dc32a4-3e51-408c-8ab1-736025541cc3 STEP: Creating a pod to test consume configMaps Jan 23 13:06:27.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0" in namespace "configmap-1892" to be "success or failure" Jan 23 13:06:27.589: INFO: Pod "pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0": Phase="Pending", Reason="", readiness=false. Elapsed: 35.255079ms Jan 23 13:06:29.600: INFO: Pod "pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046422769s Jan 23 13:06:31.616: INFO: Pod "pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062181754s Jan 23 13:06:33.636: INFO: Pod "pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.082264969s Jan 23 13:06:35.646: INFO: Pod "pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092620937s Jan 23 13:06:37.872: INFO: Pod "pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.318612555s Jan 23 13:06:39.886: INFO: Pod "pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.332545566s STEP: Saw pod success Jan 23 13:06:39.887: INFO: Pod "pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0" satisfied condition "success or failure" Jan 23 13:06:39.911: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0 container configmap-volume-test: STEP: delete the pod Jan 23 13:06:40.051: INFO: Waiting for pod pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0 to disappear Jan 23 13:06:40.061: INFO: Pod pod-configmaps-c81aed8f-15ce-49f3-9092-f780486757a0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:06:40.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1892" for this suite. Jan 23 13:06:46.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:06:46.242: INFO: namespace configmap-1892 deletion completed in 6.172045499s • [SLOW TEST:18.894 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:06:46.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-n45n STEP: Creating a pod to test atomic-volume-subpath Jan 23 13:06:46.497: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-n45n" in namespace "subpath-7452" to be "success or failure" Jan 23 13:06:46.505: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Pending", Reason="", readiness=false. Elapsed: 7.790014ms Jan 23 13:06:48.525: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027873084s Jan 23 13:06:50.541: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.044013564s Jan 23 13:06:52.553: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055545462s Jan 23 13:06:54.569: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071285427s Jan 23 13:06:56.585: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 10.087364768s Jan 23 13:06:58.607: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 12.10979188s Jan 23 13:07:00.622: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 14.124229316s Jan 23 13:07:02.630: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 16.132339762s Jan 23 13:07:04.645: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 18.147496531s Jan 23 13:07:06.657: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 20.159485108s Jan 23 13:07:08.670: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 22.172669703s Jan 23 13:07:10.679: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 24.181694117s Jan 23 13:07:12.689: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 26.191775284s Jan 23 13:07:14.705: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 28.207500522s Jan 23 13:07:16.712: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Running", Reason="", readiness=true. Elapsed: 30.214457347s Jan 23 13:07:18.720: INFO: Pod "pod-subpath-test-secret-n45n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.223137998s STEP: Saw pod success Jan 23 13:07:18.721: INFO: Pod "pod-subpath-test-secret-n45n" satisfied condition "success or failure" Jan 23 13:07:18.726: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-n45n container test-container-subpath-secret-n45n: STEP: delete the pod Jan 23 13:07:18.795: INFO: Waiting for pod pod-subpath-test-secret-n45n to disappear Jan 23 13:07:18.799: INFO: Pod pod-subpath-test-secret-n45n no longer exists STEP: Deleting pod pod-subpath-test-secret-n45n Jan 23 13:07:18.799: INFO: Deleting pod "pod-subpath-test-secret-n45n" in namespace "subpath-7452" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:07:18.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7452" for this suite. 
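
The subpath run above exercises the atomic-writer path: a secret volume is mounted into the container at a subPath, the pod stays Running while the test container re-reads the file, and the pod reaches Succeeded once the content checks pass. Below is a minimal client-go sketch of that pod shape; the secret name, pod name, and namespace are illustrative (not the generated ones from this run), and it assumes a recent client-go where Create takes a context.

```go
// Sketch only: a pod mounting a single Secret key via subPath, roughly
// what the "Atomic writer volumes" subpath test builds. Names are made up.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"cat", "/data/key1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-vol",
					MountPath: "/data/key1",
					SubPath:   "key1", // mount one key of the secret, not the whole directory
				}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}
```
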
Jan 23 13:07:26.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:07:26.956: INFO: namespace subpath-7452 deletion completed in 8.149095303s • [SLOW TEST:40.712 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:07:26.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:08:25.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9841" for this suite. 
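
The blackbox test above creates one container per restart policy (the rpa/rpof/rpn suffixes correspond to RestartPolicyAlways/OnFailure/Never), lets it exit, and asserts on RestartCount, Phase, the Ready condition, and the terminated State. A sketch of reading those same status fields back, assuming a pod named terminate-demo already exists in the default namespace and a recent client-go:

```go
// Sketch: inspect the pod status fields the blackbox test asserts on.
// "terminate-demo" is a hypothetical pod name, not one from this run.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "terminate-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if len(pod.Status.ContainerStatuses) == 0 {
		panic("no container statuses yet")
	}
	st := pod.Status.ContainerStatuses[0]
	fmt.Println("phase:", pod.Status.Phase)   // e.g. Succeeded for RestartPolicyNever + exit 0
	fmt.Println("ready:", st.Ready)           // false once the container has exited
	fmt.Println("restarts:", st.RestartCount) // grows under OnFailure/Always policies
	if st.State.Terminated != nil {
		fmt.Println("exit code:", st.State.Terminated.ExitCode)
	}
}
```
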
Jan 23 13:08:31.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:08:32.117: INFO: namespace container-runtime-9841 deletion completed in 6.183169822s • [SLOW TEST:65.160 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:08:32.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-84a405d1-7a6c-48be-8db1-f5a5ba4a6c53 in namespace container-probe-8498 Jan 23 13:08:42.481: INFO: Started pod busybox-84a405d1-7a6c-48be-8db1-f5a5ba4a6c53 in namespace container-probe-8498 STEP: checking the pod's current state and verifying that restartCount is present Jan 23 13:08:42.488: INFO: Initial restart count of pod busybox-84a405d1-7a6c-48be-8db1-f5a5ba4a6c53 is 0 Jan 23 13:09:39.173: INFO: Restart count of pod container-probe-8498/busybox-84a405d1-7a6c-48be-8db1-f5a5ba4a6c53 is now 1 (56.684796274s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:09:39.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8498" for this suite. 
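
The probe test above is the standard exec-liveness scenario: the container creates /tmp/health, removes it after a delay, the "cat /tmp/health" probe starts failing, and the kubelet restarts the container, which is why the restart count moves from 0 to 1 roughly a minute in. A hedged sketch of such a pod follows; the image, names, and timings are illustrative rather than the test's own, and the Handler field matches the v1.15-era API in this log (newer client-go renames it ProbeHandler).

```go
// Sketch: classic exec liveness probe that is designed to fail after ~30s,
// forcing one container restart. All names and delays are illustrative.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Create the health file, then remove it so the probe starts failing.
				Args: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```
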
Jan 23 13:09:45.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:09:45.481: INFO: namespace container-probe-8498 deletion completed in 6.207276739s • [SLOW TEST:73.363 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:09:45.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 23 13:09:45.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d" in namespace "projected-6665" to be "success or failure" Jan 23 13:09:45.614: INFO: Pod "downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.433755ms Jan 23 13:09:47.622: INFO: Pod "downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017846307s Jan 23 13:09:49.631: INFO: Pod "downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026777359s Jan 23 13:09:51.644: INFO: Pod "downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039548111s Jan 23 13:09:53.677: INFO: Pod "downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d": Phase="Running", Reason="", readiness=true. Elapsed: 8.072535151s Jan 23 13:09:55.689: INFO: Pod "downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084789106s STEP: Saw pod success Jan 23 13:09:55.689: INFO: Pod "downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d" satisfied condition "success or failure" Jan 23 13:09:55.697: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d container client-container: STEP: delete the pod Jan 23 13:09:55.895: INFO: Waiting for pod downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d to disappear Jan 23 13:09:55.909: INFO: Pod downwardapi-volume-8abc28af-3a33-4abd-b5d5-d4c55bc2230d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:09:55.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6665" for this suite. 
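
The projected downwardAPI test above mounts the container's own memory request as a file and has the client-container print it; the success check reads that value back from the container logs. A sketch of the volume wiring follows; the names and the 32Mi request are illustrative, and resourceFieldRef only has something to report because the container actually sets a memory request.

```go
// Sketch: projected downwardAPI volume exposing requests.memory as a file.
// Pod/volume names and the request size are made up for illustration.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```
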
Jan 23 13:10:01.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:10:02.191: INFO: namespace projected-6665 deletion completed in 6.271131878s • [SLOW TEST:16.711 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:10:02.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-c58e7cb7-ce61-4786-9784-60c0a69dcbc1 STEP: Creating a pod to test consume configMaps Jan 23 13:10:02.372: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee" in namespace "projected-5414" to be "success or failure" Jan 23 13:10:02.377: INFO: Pod "pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131299ms Jan 23 13:10:04.387: INFO: Pod "pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014300667s Jan 23 13:10:06.405: INFO: Pod "pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032476297s Jan 23 13:10:08.424: INFO: Pod "pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051149662s Jan 23 13:10:10.434: INFO: Pod "pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061945715s Jan 23 13:10:12.847: INFO: Pod "pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee": Phase="Pending", Reason="", readiness=false. Elapsed: 10.47498594s Jan 23 13:10:14.866: INFO: Pod "pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.49384142s STEP: Saw pod success Jan 23 13:10:14.867: INFO: Pod "pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee" satisfied condition "success or failure" Jan 23 13:10:14.874: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee container projected-configmap-volume-test: STEP: delete the pod Jan 23 13:10:15.000: INFO: Waiting for pod pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee to disappear Jan 23 13:10:15.017: INFO: Pod pod-projected-configmaps-ba97a9d8-0637-4571-b6ba-c417a4a255ee no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:10:15.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5414" for this suite. Jan 23 13:10:21.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:10:21.211: INFO: namespace projected-5414 deletion completed in 6.182313115s • [SLOW TEST:19.020 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:10:21.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 23 13:10:21.339: INFO: Waiting up to 5m0s for pod "downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a" in namespace "downward-api-1121" to be "success or failure" Jan 23 13:10:21.349: INFO: Pod "downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.079824ms Jan 23 13:10:23.355: INFO: Pod "downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015744601s Jan 23 13:10:25.367: INFO: Pod "downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027507687s Jan 23 13:10:27.381: INFO: Pod "downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041635493s Jan 23 13:10:29.394: INFO: Pod "downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.054033888s Jan 23 13:10:31.407: INFO: Pod "downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067572477s STEP: Saw pod success Jan 23 13:10:31.408: INFO: Pod "downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a" satisfied condition "success or failure" Jan 23 13:10:31.413: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a container client-container: STEP: delete the pod Jan 23 13:10:31.667: INFO: Waiting for pod downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a to disappear Jan 23 13:10:31.739: INFO: Pod downwardapi-volume-303e7130-e133-48b8-830c-73c2c57ded2a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:10:31.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1121" for this suite. Jan 23 13:10:37.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:10:37.974: INFO: namespace downward-api-1121 deletion completed in 6.156175551s • [SLOW TEST:16.762 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:10:37.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jan 23 13:10:38.746: INFO: created pod pod-service-account-defaultsa Jan 23 13:10:38.746: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 23 13:10:38.757: INFO: created pod pod-service-account-mountsa Jan 23 13:10:38.757: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 23 13:10:38.836: INFO: created pod pod-service-account-nomountsa Jan 23 13:10:38.837: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 23 13:10:38.873: INFO: created pod pod-service-account-defaultsa-mountspec Jan 23 13:10:38.874: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 23 13:10:39.026: INFO: created pod pod-service-account-mountsa-mountspec Jan 23 13:10:39.026: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 23 13:10:39.067: INFO: created pod pod-service-account-nomountsa-mountspec Jan 23 13:10:39.067: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 23 13:10:39.114: INFO: created pod pod-service-account-defaultsa-nomountspec 
Jan 23 13:10:39.114: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 23 13:10:39.221: INFO: created pod pod-service-account-mountsa-nomountspec Jan 23 13:10:39.222: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 23 13:10:39.266: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 23 13:10:39.267: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:10:39.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1353" for this suite. Jan 23 13:11:26.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:11:26.183: INFO: namespace svcaccounts-1353 deletion completed in 46.659562165s • [SLOW TEST:48.209 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:11:26.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 23 13:11:26.384: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3" in namespace "projected-419" to be "success or failure" Jan 23 13:11:26.429: INFO: Pod "downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3": Phase="Pending", Reason="", readiness=false. Elapsed: 44.592706ms Jan 23 13:11:28.443: INFO: Pod "downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058491264s Jan 23 13:11:30.453: INFO: Pod "downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067964005s Jan 23 13:11:32.467: INFO: Pod "downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082584852s Jan 23 13:11:34.520: INFO: Pod "downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134966338s Jan 23 13:11:36.535: INFO: Pod "downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.150159538s STEP: Saw pod success Jan 23 13:11:36.535: INFO: Pod "downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3" satisfied condition "success or failure" Jan 23 13:11:36.538: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3 container client-container: STEP: delete the pod Jan 23 13:11:36.656: INFO: Waiting for pod downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3 to disappear Jan 23 13:11:36.662: INFO: Pod downwardapi-volume-2e546f3f-f730-4348-895b-87aa39bb61d3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:11:36.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-419" for this suite. Jan 23 13:11:42.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:11:42.874: INFO: namespace projected-419 deletion completed in 6.204543603s • [SLOW TEST:16.691 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:11:42.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 23 13:11:43.044: INFO: Waiting up to 5m0s for pod "pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95" in namespace "emptydir-7893" to be "success or failure" Jan 23 13:11:43.080: INFO: Pod "pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95": Phase="Pending", Reason="", readiness=false. Elapsed: 35.315431ms Jan 23 13:11:45.088: INFO: Pod "pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043895433s Jan 23 13:11:47.100: INFO: Pod "pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055539892s Jan 23 13:11:49.111: INFO: Pod "pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067102983s Jan 23 13:11:51.134: INFO: Pod "pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089295861s Jan 23 13:11:53.142: INFO: Pod "pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95": Phase="Pending", Reason="", readiness=false. Elapsed: 10.097375278s Jan 23 13:11:55.154: INFO: Pod "pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.109559492s STEP: Saw pod success Jan 23 13:11:55.154: INFO: Pod "pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95" satisfied condition "success or failure" Jan 23 13:11:55.158: INFO: Trying to get logs from node iruya-node pod pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95 container test-container: STEP: delete the pod Jan 23 13:11:55.433: INFO: Waiting for pod pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95 to disappear Jan 23 13:11:55.450: INFO: Pod pod-b8158b67-b5c3-4c0a-8ecc-3cd2c754de95 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 23 13:11:55.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7893" for this suite. Jan 23 13:12:01.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 23 13:12:01.654: INFO: namespace emptydir-7893 deletion completed in 6.195165662s • [SLOW TEST:18.779 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 23 13:12:01.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-7vs77 in namespace proxy-3975 I0123 13:12:02.106303 8 runners.go:180] Created replication controller with name: proxy-service-7vs77, namespace: proxy-3975, replica count: 1 I0123 13:12:03.157545 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 13:12:04.158130 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 13:12:05.158857 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 13:12:06.159327 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 13:12:07.159765 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 13:12:08.160178 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 13:12:09.160541 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 13:12:10.160948 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 13:12:11.161317 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 13:12:12.162204 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 13:12:13.163241 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 13:12:14.164116 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 13:12:15.164566 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 13:12:16.165088 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 13:12:17.165526 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 13:12:18.165883 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0123 13:12:19.166383 8 runners.go:180] proxy-service-7vs77 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 23 13:12:19.173: INFO: setup took 17.315528552s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 23 13:12:19.261: INFO: (0) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 88.089531ms) Jan 23 13:12:19.262: INFO: (0) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 88.020189ms) Jan 23 13:12:19.262: INFO: (0) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... 
(200; 88.693695ms) Jan 23 13:12:19.262: INFO: (0) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 88.282948ms) Jan 23 13:12:19.262: INFO: (0) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 88.807509ms) Jan 23 13:12:19.262: INFO: (0) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 88.308074ms) Jan 23 13:12:19.262: INFO: (0) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 88.756192ms) Jan 23 13:12:19.263: INFO: (0) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 89.436912ms) Jan 23 13:12:19.263: INFO: (0) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 89.947195ms) Jan 23 13:12:19.265: INFO: (0) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 91.989598ms) Jan 23 13:12:19.267: INFO: (0) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 93.363147ms) Jan 23 13:12:19.297: INFO: (0) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 124.194803ms) Jan 23 13:12:19.297: INFO: (0) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 124.097397ms) Jan 23 13:12:19.297: INFO: (0) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 124.169022ms) Jan 23 13:12:19.298: INFO: (0) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 124.127186ms) Jan 23 13:12:19.297: INFO: (0) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: ... (200; 53.881358ms) Jan 23 13:12:19.352: INFO: (1) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 54.158387ms) Jan 23 13:12:19.353: INFO: (1) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 53.981201ms) Jan 23 13:12:19.353: INFO: (1) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 54.372981ms) Jan 23 13:12:19.353: INFO: (1) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 54.781533ms) Jan 23 13:12:19.353: INFO: (1) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 54.348519ms) Jan 23 13:12:19.353: INFO: (1) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 54.566531ms) Jan 23 13:12:19.354: INFO: (1) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test (200; 21.376145ms) Jan 23 13:12:19.380: INFO: (2) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 24.320312ms) Jan 23 13:12:19.380: INFO: (2) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 24.452949ms) Jan 23 13:12:19.382: INFO: (2) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 27.097293ms) Jan 23 13:12:19.383: INFO: (2) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 27.23684ms) Jan 23 13:12:19.383: INFO: (2) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 27.337703ms) Jan 23 13:12:19.383: INFO: (2) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... 
(200; 28.309231ms) Jan 23 13:12:19.383: INFO: (2) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 28.015467ms) Jan 23 13:12:19.384: INFO: (2) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 28.612441ms) Jan 23 13:12:19.384: INFO: (2) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 29.053315ms) Jan 23 13:12:19.384: INFO: (2) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 28.53345ms) Jan 23 13:12:19.385: INFO: (2) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 29.164934ms) Jan 23 13:12:19.385: INFO: (2) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 29.564573ms) Jan 23 13:12:19.385: INFO: (2) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 30.33072ms) Jan 23 13:12:19.385: INFO: (2) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: ... (200; 9.58981ms) Jan 23 13:12:19.396: INFO: (3) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 10.68295ms) Jan 23 13:12:19.397: INFO: (3) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 11.066752ms) Jan 23 13:12:19.397: INFO: (3) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.391078ms) Jan 23 13:12:19.397: INFO: (3) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 11.37563ms) Jan 23 13:12:19.397: INFO: (3) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 11.259031ms) Jan 23 13:12:19.397: INFO: (3) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.692501ms) Jan 23 13:12:19.397: INFO: (3) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 11.306676ms) Jan 23 13:12:19.400: INFO: (3) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 14.542552ms) Jan 23 13:12:19.403: INFO: (3) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test (200; 6.570725ms) Jan 23 13:12:19.416: INFO: (4) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 9.493664ms) Jan 23 13:12:19.418: INFO: (4) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 10.861657ms) Jan 23 13:12:19.418: INFO: (4) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 11.336644ms) Jan 23 13:12:19.419: INFO: (4) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... 
(200; 11.702567ms) Jan 23 13:12:19.419: INFO: (4) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.765478ms) Jan 23 13:12:19.419: INFO: (4) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 11.91527ms) Jan 23 13:12:19.419: INFO: (4) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 12.531225ms) Jan 23 13:12:19.420: INFO: (4) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 12.685711ms) Jan 23 13:12:19.420: INFO: (4) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 12.838022ms) Jan 23 13:12:19.420: INFO: (4) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test<... (200; 5.361974ms) Jan 23 13:12:19.430: INFO: (5) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 5.667471ms) Jan 23 13:12:19.435: INFO: (5) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.13988ms) Jan 23 13:12:19.436: INFO: (5) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 11.751528ms) Jan 23 13:12:19.436: INFO: (5) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 11.790518ms) Jan 23 13:12:19.436: INFO: (5) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 11.934097ms) Jan 23 13:12:19.436: INFO: (5) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 11.969509ms) Jan 23 13:12:19.436: INFO: (5) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 11.900161ms) Jan 23 13:12:19.437: INFO: (5) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test (200; 18.757796ms) Jan 23 13:12:19.458: INFO: (6) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 14.504322ms) Jan 23 13:12:19.458: INFO: (6) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 15.339101ms) Jan 23 13:12:19.459: INFO: (6) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 15.623076ms) Jan 23 13:12:19.459: INFO: (6) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 16.625706ms) Jan 23 13:12:19.460: INFO: (6) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 16.667492ms) Jan 23 13:12:19.460: INFO: (6) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 17.049766ms) Jan 23 13:12:19.461: INFO: (6) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 17.51563ms) Jan 23 13:12:19.461: INFO: (6) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 17.832746ms) Jan 23 13:12:19.461: INFO: (6) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 17.632219ms) Jan 23 13:12:19.461: INFO: (6) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: ... 
(200; 9.586819ms) Jan 23 13:12:19.480: INFO: (7) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 10.704603ms) Jan 23 13:12:19.480: INFO: (7) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 10.655196ms) Jan 23 13:12:19.480: INFO: (7) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 11.156029ms) Jan 23 13:12:19.481: INFO: (7) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.51794ms) Jan 23 13:12:19.481: INFO: (7) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 11.844226ms) Jan 23 13:12:19.482: INFO: (7) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 12.480305ms) Jan 23 13:12:19.482: INFO: (7) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test<... (200; 12.774194ms) Jan 23 13:12:19.484: INFO: (7) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 14.67278ms) Jan 23 13:12:19.484: INFO: (7) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 14.446621ms) Jan 23 13:12:19.484: INFO: (7) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 14.378386ms) Jan 23 13:12:19.484: INFO: (7) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 15.196491ms) Jan 23 13:12:19.486: INFO: (7) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 16.50378ms) Jan 23 13:12:19.496: INFO: (8) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 10.306978ms) Jan 23 13:12:19.498: INFO: (8) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 11.916969ms) Jan 23 13:12:19.498: INFO: (8) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... 
(200; 11.585159ms) Jan 23 13:12:19.498: INFO: (8) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 12.040874ms) Jan 23 13:12:19.498: INFO: (8) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 11.702317ms) Jan 23 13:12:19.498: INFO: (8) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.762945ms) Jan 23 13:12:19.498: INFO: (8) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 11.839786ms) Jan 23 13:12:19.498: INFO: (8) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 11.877386ms) Jan 23 13:12:19.500: INFO: (8) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 14.215282ms) Jan 23 13:12:19.500: INFO: (8) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 13.987748ms) Jan 23 13:12:19.501: INFO: (8) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 15.107835ms) Jan 23 13:12:19.502: INFO: (8) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 16.039068ms) Jan 23 13:12:19.502: INFO: (8) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 15.996555ms) Jan 23 13:12:19.502: INFO: (8) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 15.920217ms) Jan 23 13:12:19.502: INFO: (8) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test (200; 8.7052ms) Jan 23 13:12:19.511: INFO: (9) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 8.537971ms) Jan 23 13:12:19.511: INFO: (9) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 8.664958ms) Jan 23 13:12:19.512: INFO: (9) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 9.294983ms) Jan 23 13:12:19.512: INFO: (9) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 9.316505ms) Jan 23 13:12:19.512: INFO: (9) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 9.459957ms) Jan 23 13:12:19.512: INFO: (9) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: ... (200; 9.88846ms) Jan 23 13:12:19.528: INFO: (10) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 9.914935ms) Jan 23 13:12:19.528: INFO: (10) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 10.20362ms) Jan 23 13:12:19.530: INFO: (10) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.4751ms) Jan 23 13:12:19.530: INFO: (10) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 11.456347ms) Jan 23 13:12:19.530: INFO: (10) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 11.578143ms) Jan 23 13:12:19.530: INFO: (10) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test<... 
(200; 11.567264ms) Jan 23 13:12:19.530: INFO: (10) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 11.511249ms) Jan 23 13:12:19.530: INFO: (10) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 11.561357ms) Jan 23 13:12:19.530: INFO: (10) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 11.619826ms) Jan 23 13:12:19.532: INFO: (10) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 13.541125ms) Jan 23 13:12:19.532: INFO: (10) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 13.676739ms) Jan 23 13:12:19.539: INFO: (11) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 7.242842ms) Jan 23 13:12:19.539: INFO: (11) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test<... (200; 6.876376ms) Jan 23 13:12:19.540: INFO: (11) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 6.867629ms) Jan 23 13:12:19.543: INFO: (11) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 10.041385ms) Jan 23 13:12:19.544: INFO: (11) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 10.978608ms) Jan 23 13:12:19.544: INFO: (11) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 11.142247ms) Jan 23 13:12:19.544: INFO: (11) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 11.038277ms) Jan 23 13:12:19.544: INFO: (11) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 11.36009ms) Jan 23 13:12:19.545: INFO: (11) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 12.454053ms) Jan 23 13:12:19.545: INFO: (11) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 12.364508ms) Jan 23 13:12:19.546: INFO: (11) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 13.174234ms) Jan 23 13:12:19.546: INFO: (11) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 12.802511ms) Jan 23 13:12:19.546: INFO: (11) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 12.768711ms) Jan 23 13:12:19.559: INFO: (12) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 12.885426ms) Jan 23 13:12:19.559: INFO: (12) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 12.945146ms) Jan 23 13:12:19.559: INFO: (12) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 13.11494ms) Jan 23 13:12:19.560: INFO: (12) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: ... 
(200; 14.869848ms) Jan 23 13:12:19.561: INFO: (12) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 15.2068ms) Jan 23 13:12:19.561: INFO: (12) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 15.385297ms) Jan 23 13:12:19.562: INFO: (12) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 15.506708ms) Jan 23 13:12:19.562: INFO: (12) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 15.749944ms) Jan 23 13:12:19.562: INFO: (12) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 15.806514ms) Jan 23 13:12:19.562: INFO: (12) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 16.031503ms) Jan 23 13:12:19.562: INFO: (12) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 16.048328ms) Jan 23 13:12:19.571: INFO: (13) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 8.894809ms) Jan 23 13:12:19.571: INFO: (13) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 8.888145ms) Jan 23 13:12:19.571: INFO: (13) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 9.036346ms) Jan 23 13:12:19.572: INFO: (13) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 9.311911ms) Jan 23 13:12:19.572: INFO: (13) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 9.286202ms) Jan 23 13:12:19.573: INFO: (13) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 10.711454ms) Jan 23 13:12:19.573: INFO: (13) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 10.963179ms) Jan 23 13:12:19.574: INFO: (13) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.49402ms) Jan 23 13:12:19.574: INFO: (13) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 11.405953ms) Jan 23 13:12:19.575: INFO: (13) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 12.102049ms) Jan 23 13:12:19.575: INFO: (13) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: ... (200; 5.470029ms) Jan 23 13:12:19.588: INFO: (14) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 9.046942ms) Jan 23 13:12:19.588: INFO: (14) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 9.595818ms) Jan 23 13:12:19.589: INFO: (14) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... 
(200; 9.741645ms)
Jan 23 13:12:19.589: INFO: (14) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 9.970857ms)
Jan 23 13:12:19.589: INFO: (14) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 9.891115ms)
Jan 23 13:12:19.589: INFO: (14) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 10.006895ms)
Jan 23 13:12:19.590: INFO: (14) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 10.765504ms)
Jan 23 13:12:19.590: INFO: (14) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 10.800424ms)
Jan 23 13:12:19.590: INFO: (14) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 10.840239ms)
Jan 23 13:12:19.590: INFO: (14) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 10.87066ms)
Jan 23 13:12:19.590: INFO: (14) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 10.827725ms)
Jan 23 13:12:19.599: INFO: (15) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 8.678517ms)
Jan 23 13:12:19.599: INFO: (15) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 9.389972ms)
Jan 23 13:12:19.599: INFO: (15) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 9.379967ms)
Jan 23 13:12:19.601: INFO: (15) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 11.334656ms)
Jan 23 13:12:19.601: INFO: (15) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.321613ms)
Jan 23 13:12:19.601: INFO: (15) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test<... (200; 11.384611ms)
Jan 23 13:12:19.601: INFO: (15) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 11.315741ms)
Jan 23 13:12:19.601: INFO: (15) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.377591ms)
Jan 23 13:12:19.601: INFO: (15) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 11.337535ms)
Jan 23 13:12:19.601: INFO: (15) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 11.467279ms)
Jan 23 13:12:19.602: INFO: (15) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 11.760859ms)
Jan 23 13:12:19.602: INFO: (15) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 11.966404ms)
Jan 23 13:12:19.602: INFO: (15) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 11.932078ms)
Jan 23 13:12:19.602: INFO: (15) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 12.002496ms)
Jan 23 13:12:19.602: INFO: (15) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 12.613861ms)
Jan 23 13:12:19.612: INFO: (16) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 9.352795ms)
Jan 23 13:12:19.612: INFO: (16) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 9.540415ms)
Jan 23 13:12:19.613: INFO: (16) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 10.153535ms)
Jan 23 13:12:19.613: INFO: (16) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 10.359016ms)
Jan 23 13:12:19.614: INFO: (16) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 11.336562ms)
Jan 23 13:12:19.614: INFO: (16) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 11.51274ms)
Jan 23 13:12:19.615: INFO: (16) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 11.887718ms)
Jan 23 13:12:19.615: INFO: (16) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test (200; 11.99895ms)
Jan 23 13:12:19.615: INFO: (16) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 12.277903ms)
Jan 23 13:12:19.615: INFO: (16) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 12.447157ms)
Jan 23 13:12:19.616: INFO: (16) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 12.815393ms)
Jan 23 13:12:19.617: INFO: (16) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 14.108136ms)
Jan 23 13:12:19.618: INFO: (16) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 15.3765ms)
Jan 23 13:12:19.621: INFO: (16) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 17.952535ms)
Jan 23 13:12:19.635: INFO: (17) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:1080/proxy/: test<... (200; 14.493008ms)
Jan 23 13:12:19.635: INFO: (17) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 14.338006ms)
Jan 23 13:12:19.635: INFO: (17) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 14.382976ms)
Jan 23 13:12:19.636: INFO: (17) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 14.748404ms)
Jan 23 13:12:19.636: INFO: (17) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 14.966025ms)
Jan 23 13:12:19.636: INFO: (17) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 14.930629ms)
Jan 23 13:12:19.636: INFO: (17) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 15.550562ms)
Jan 23 13:12:19.636: INFO: (17) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 15.552432ms)
Jan 23 13:12:19.636: INFO: (17) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 15.628323ms)
Jan 23 13:12:19.636: INFO: (17) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 15.60613ms)
Jan 23 13:12:19.636: INFO: (17) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 15.625854ms)
Jan 23 13:12:19.636: INFO: (17) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 15.795784ms)
Jan 23 13:12:19.638: INFO: (17) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 17.348351ms)
Jan 23 13:12:19.638: INFO: (17) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 17.309743ms)
Jan 23 13:12:19.638: INFO: (17) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 17.707687ms)
Jan 23 13:12:19.639: INFO: (17) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test<... (200; 10.345393ms)
Jan 23 13:12:19.649: INFO: (18) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 10.288166ms)
Jan 23 13:12:19.649: INFO: (18) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test (200; 10.702002ms)
Jan 23 13:12:19.650: INFO: (18) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 10.641362ms)
Jan 23 13:12:19.651: INFO: (18) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:460/proxy/: tls baz (200; 12.180978ms)
Jan 23 13:12:19.654: INFO: (18) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 15.731308ms)
Jan 23 13:12:19.655: INFO: (18) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 16.692089ms)
Jan 23 13:12:19.656: INFO: (18) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 17.343265ms)
Jan 23 13:12:19.656: INFO: (18) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 17.420105ms)
Jan 23 13:12:19.657: INFO: (18) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 17.944479ms)
Jan 23 13:12:19.657: INFO: (18) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 18.078358ms)
Jan 23 13:12:19.666: INFO: (19) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m/proxy/: test (200; 8.469955ms)
Jan 23 13:12:19.666: INFO: (19) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:443/proxy/: test<... (200; 9.231564ms)
Jan 23 13:12:19.666: INFO: (19) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:1080/proxy/: ... (200; 9.322499ms)
Jan 23 13:12:19.666: INFO: (19) /api/v1/namespaces/proxy-3975/pods/http:proxy-service-7vs77-bpc6m:160/proxy/: foo (200; 9.403553ms)
Jan 23 13:12:19.671: INFO: (19) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname1/proxy/: foo (200; 13.6568ms)
Jan 23 13:12:19.671: INFO: (19) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/: tls baz (200; 14.007725ms)
Jan 23 13:12:19.671: INFO: (19) /api/v1/namespaces/proxy-3975/pods/https:proxy-service-7vs77-bpc6m:462/proxy/: tls qux (200; 14.054414ms)
Jan 23 13:12:19.675: INFO: (19) /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname2/proxy/: tls qux (200; 18.141644ms)
Jan 23 13:12:19.675: INFO: (19) /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:162/proxy/: bar (200; 18.279442ms)
Jan 23 13:12:19.675: INFO: (19) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname2/proxy/: bar (200; 18.557457ms)
Jan 23 13:12:19.675: INFO: (19) /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/: foo (200; 18.276606ms)
Jan 23 13:12:19.677: INFO: (19) /api/v1/namespaces/proxy-3975/services/http:proxy-service-7vs77:portname2/proxy/: bar (200; 19.819959ms)
STEP: deleting ReplicationController proxy-service-7vs77 in namespace proxy-3975, will wait for the garbage collector to delete the pods
Jan 23 13:12:19.829: INFO: Deleting ReplicationController proxy-service-7vs77 took: 98.799297ms
Jan 23 13:12:20.130: INFO: Terminating ReplicationController proxy-service-7vs77 pods took: 300.729019ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:12:26.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3975" for this suite.
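
Note: every timed request above goes through the apiserver proxy subresource, whose path encodes the target as [scheme:]name[:port] for either a service or a pod. A minimal sketch of the same access pattern, assuming kubectl points at this cluster (all names are taken from the log above):

# proxy to a named service port, and to a pod port, via the apiserver
kubectl get --raw /api/v1/namespaces/proxy-3975/services/proxy-service-7vs77:portname1/proxy/
kubectl get --raw /api/v1/namespaces/proxy-3975/pods/proxy-service-7vs77-bpc6m:160/proxy/
# an http:/https: prefix selects the scheme the apiserver uses to reach the backend
kubectl get --raw /api/v1/namespaces/proxy-3975/services/https:proxy-service-7vs77:tlsportname1/proxy/
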
Jan 23 13:12:32.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:12:33.114: INFO: namespace proxy-3975 deletion completed in 6.170869975s

• [SLOW TEST:31.459 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:12:33.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-682a2253-6ae9-41ba-975f-00f8d6b90714
STEP: Creating a pod to test consume configMaps
Jan 23 13:12:33.207: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270" in namespace "projected-7157" to be "success or failure"
Jan 23 13:12:33.221: INFO: Pod "pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270": Phase="Pending", Reason="", readiness=false. Elapsed: 13.24358ms
Jan 23 13:12:35.254: INFO: Pod "pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046200307s
Jan 23 13:12:37.269: INFO: Pod "pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061349499s
Jan 23 13:12:39.283: INFO: Pod "pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075538081s
Jan 23 13:12:41.291: INFO: Pod "pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083154261s
Jan 23 13:12:43.301: INFO: Pod "pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09334818s
STEP: Saw pod success
Jan 23 13:12:43.301: INFO: Pod "pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270" satisfied condition "success or failure"
Jan 23 13:12:43.304: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270 container projected-configmap-volume-test:
STEP: delete the pod
Jan 23 13:12:43.484: INFO: Waiting for pod pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270 to disappear
Jan 23 13:12:43.492: INFO: Pod pod-projected-configmaps-14b37cd9-e286-43c1-8575-ad0c94421270 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:12:43.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7157" for this suite.
Jan 23 13:12:49.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:12:49.887: INFO: namespace projected-7157 deletion completed in 6.348156474s

• [SLOW TEST:16.773 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:12:49.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 23 13:12:50.083: INFO: Waiting up to 5m0s for pod "var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf" in namespace "var-expansion-3262" to be "success or failure"
Jan 23 13:12:50.091: INFO: Pod "var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21581ms
Jan 23 13:12:52.110: INFO: Pod "var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02692818s
Jan 23 13:12:54.123: INFO: Pod "var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040738993s
Jan 23 13:12:56.131: INFO: Pod "var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048564558s
Jan 23 13:12:58.151: INFO: Pod "var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06803628s
Jan 23 13:13:00.165: INFO: Pod "var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082076595s
STEP: Saw pod success
Jan 23 13:13:00.165: INFO: Pod "var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf" satisfied condition "success or failure"
Jan 23 13:13:00.171: INFO: Trying to get logs from node iruya-node pod var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf container dapi-container:
STEP: delete the pod
Jan 23 13:13:00.324: INFO: Waiting for pod var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf to disappear
Jan 23 13:13:00.429: INFO: Pod var-expansion-af9ea62d-8e38-428e-8fc1-74174249b6bf no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:13:00.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3262" for this suite.
Jan 23 13:13:06.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:13:06.603: INFO: namespace var-expansion-3262 deletion completed in 6.163267498s

• [SLOW TEST:16.715 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:13:06.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5126
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5126
STEP: Creating statefulset with conflicting port in namespace statefulset-5126
STEP: Waiting until pod test-pod will start running in namespace statefulset-5126
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5126
Jan 23 13:13:17.140: INFO: Observed stateful pod in namespace: statefulset-5126, name: ss-0, uid: a6676e71-2017-4405-b73b-47998d1865cc, status phase: Pending. Waiting for statefulset controller to delete.
Jan 23 13:13:17.145: INFO: Observed stateful pod in namespace: statefulset-5126, name: ss-0, uid: a6676e71-2017-4405-b73b-47998d1865cc, status phase: Failed. Waiting for statefulset controller to delete.
Jan 23 13:13:17.160: INFO: Observed stateful pod in namespace: statefulset-5126, name: ss-0, uid: a6676e71-2017-4405-b73b-47998d1865cc, status phase: Failed. Waiting for statefulset controller to delete.
Jan 23 13:13:17.187: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5126
STEP: Removing pod with conflicting port in namespace statefulset-5126
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5126 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 23 13:13:36.232: INFO: Deleting all statefulset in ns statefulset-5126
Jan 23 13:13:36.236: INFO: Scaling statefulset ss to 0
Jan 23 13:13:56.287: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 13:13:56.293: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:13:56.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5126" for this suite.
Jan 23 13:14:02.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:14:02.545: INFO: namespace statefulset-5126 deletion completed in 6.203826733s

• [SLOW TEST:55.942 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:14:02.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 13:14:02.656: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea" in namespace "downward-api-8941" to be "success or failure"
Jan 23 13:14:02.661: INFO: Pod "downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.38812ms
Jan 23 13:14:04.679: INFO: Pod "downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022577981s
Jan 23 13:14:06.688: INFO: Pod "downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030858105s
Jan 23 13:14:08.696: INFO: Pod "downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039130857s
Jan 23 13:14:10.706: INFO: Pod "downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049566343s
Jan 23 13:14:12.715: INFO: Pod "downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057744247s
STEP: Saw pod success
Jan 23 13:14:12.715: INFO: Pod "downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea" satisfied condition "success or failure"
Jan 23 13:14:12.717: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea container client-container:
STEP: delete the pod
Jan 23 13:14:12.760: INFO: Waiting for pod downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea to disappear
Jan 23 13:14:12.763: INFO: Pod downwardapi-volume-19ffdc5b-9388-47b0-820f-fb23c81f56ea no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:14:12.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8941" for this suite.
Jan 23 13:14:18.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:14:18.997: INFO: namespace downward-api-8941 deletion completed in 6.229462403s

• [SLOW TEST:16.451 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:14:18.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 23 13:14:19.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 23 13:14:22.190: INFO: stderr: ""
Jan 23 13:14:22.190: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:14:22.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4420" for this suite.
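
Note: the \x1b[0;32m...\x1b[0m sequences in the stdout above are ANSI color escapes from kubectl's colored cluster-info output; the check only cares that the master entry is present. A rough by-hand equivalent (a sketch, not the test's actual mechanism):

kubectl cluster-info
kubectl cluster-info | grep 'Kubernetes master'   # non-empty output is what the conformance check looks for
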
Jan 23 13:14:28.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:14:28.414: INFO: namespace kubectl-4420 deletion completed in 6.218122603s

• [SLOW TEST:9.415 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:14:28.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-2015e4a9-60c6-48c7-b66b-14118b9c4803
STEP: Creating secret with name s-test-opt-upd-9987efaf-31f2-47c1-8319-78a18dada0a2
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2015e4a9-60c6-48c7-b66b-14118b9c4803
STEP: Updating secret s-test-opt-upd-9987efaf-31f2-47c1-8319-78a18dada0a2
STEP: Creating secret with name s-test-opt-create-6d72e996-4e7e-4008-bacc-7b5540f5fdb3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:14:47.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5844" for this suite.
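
Note: the secrets above can be deleted and created while the pod keeps running because the projected volume marks its sources optional; the kubelet then re-syncs the volume contents, which is the update the test waits to observe. A minimal sketch of such a volume, with a hypothetical pod name (the secret name is taken from the log):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo   # hypothetical name, not from the test
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del-2015e4a9-60c6-48c7-b66b-14118b9c4803
          optional: true   # a missing secret does not block pod startup
EOF
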
Jan 23 13:15:11.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:15:11.242: INFO: namespace projected-5844 deletion completed in 24.136398184s

• [SLOW TEST:42.828 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:15:11.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-107eaf5c-f115-41a0-a0de-6df39e5c0951
STEP: Creating a pod to test consume secrets
Jan 23 13:15:11.350: INFO: Waiting up to 5m0s for pod "pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254" in namespace "secrets-9123" to be "success or failure"
Jan 23 13:15:11.356: INFO: Pod "pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036924ms
Jan 23 13:15:13.379: INFO: Pod "pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029387842s
Jan 23 13:15:15.398: INFO: Pod "pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047897562s
Jan 23 13:15:17.413: INFO: Pod "pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063162114s
Jan 23 13:15:19.422: INFO: Pod "pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0722648s
Jan 23 13:15:21.434: INFO: Pod "pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08355294s
STEP: Saw pod success
Jan 23 13:15:21.434: INFO: Pod "pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254" satisfied condition "success or failure"
Jan 23 13:15:21.438: INFO: Trying to get logs from node iruya-node pod pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254 container secret-env-test:
STEP: delete the pod
Jan 23 13:15:21.516: INFO: Waiting for pod pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254 to disappear
Jan 23 13:15:21.533: INFO: Pod pod-secrets-6b25f163-b968-4bd7-bb2f-fcdec8a98254 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:15:21.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9123" for this suite.
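
Note: consuming a secret through env vars uses secretKeyRef instead of a volume. A minimal sketch under assumed names (the secret name is from the log; the pod name and key are hypothetical, since the log does not show the secret's data):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-107eaf5c-f115-41a0-a0de-6df39e5c0951
          key: data-1   # assumed key name
EOF
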
Jan 23 13:15:27.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:15:27.851: INFO: namespace secrets-9123 deletion completed in 6.309143103s

• [SLOW TEST:16.608 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:15:27.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 23 13:15:27.937: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 23 13:15:27.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1578'
Jan 23 13:15:28.382: INFO: stderr: ""
Jan 23 13:15:28.382: INFO: stdout: "service/redis-slave created\n"
Jan 23 13:15:28.383: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 23 13:15:28.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1578'
Jan 23 13:15:28.782: INFO: stderr: ""
Jan 23 13:15:28.783: INFO: stdout: "service/redis-master created\n"
Jan 23 13:15:28.784: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 23 13:15:28.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1578'
Jan 23 13:15:29.256: INFO: stderr: ""
Jan 23 13:15:29.256: INFO: stdout: "service/frontend created\n"
Jan 23 13:15:29.258: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 23 13:15:29.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1578'
Jan 23 13:15:29.583: INFO: stderr: ""
Jan 23 13:15:29.583: INFO: stdout: "deployment.apps/frontend created\n"
Jan 23 13:15:29.584: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 23 13:15:29.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1578'
Jan 23 13:15:30.058: INFO: stderr: ""
Jan 23 13:15:30.058: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 23 13:15:30.059: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 23 13:15:30.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1578'
Jan 23 13:15:31.310: INFO: stderr: ""
Jan 23 13:15:31.310: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 23 13:15:31.310: INFO: Waiting for all frontend pods to be Running.
Jan 23 13:15:56.364: INFO: Waiting for frontend to serve content.
Jan 23 13:15:57.704: INFO: Trying to add a new entry to the guestbook.
Jan 23 13:15:57.778: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 23 13:15:57.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1578'
Jan 23 13:15:58.136: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 13:15:58.137: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 13:15:58.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1578'
Jan 23 13:15:58.340: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 13:15:58.340: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 13:15:58.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1578'
Jan 23 13:15:58.680: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 13:15:58.680: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 13:15:58.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1578'
Jan 23 13:15:58.766: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 13:15:58.766: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 13:15:58.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1578'
Jan 23 13:15:58.893: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 13:15:58.893: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 23 13:15:58.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1578'
Jan 23 13:15:59.053: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 13:15:59.053: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:15:59.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1578" for this suite.
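
Note: the cleanup above pipes each manifest back into kubectl delete with --grace-period=0 --force, which is why every call prints the immediate-deletion warning. The same pattern by hand (a sketch; delete -f only needs enough of the manifest to identify the object):

kubectl delete --grace-period=0 --force -f - --namespace=kubectl-1578 <<EOF
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
EOF
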
Jan 23 13:16:51.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:16:51.338: INFO: namespace kubectl-1578 deletion completed in 52.269931s

• [SLOW TEST:83.487 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:16:51.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 23 13:16:51.403: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:17:04.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1137" for this suite.
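
Note: with restartPolicy: Never, a failing init container marks the whole pod Failed and the app containers are never started, which is the behavior asserted above. A minimal sketch of such a pod (names and images are assumptions; the log records only "PodSpec: initContainers in spec.initContainers"):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo   # hypothetical
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["sh", "-c", "exit 1"]   # fails once, pod goes to Failed
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo never runs"]
EOF
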
Jan 23 13:17:10.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:17:10.480: INFO: namespace init-container-1137 deletion completed in 6.166806751s

• [SLOW TEST:19.141 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:17:10.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 13:17:10.620: INFO: Create a RollingUpdate DaemonSet
Jan 23 13:17:10.626: INFO: Check that daemon pods launch on every node of the cluster
Jan 23 13:17:10.639: INFO: Number of nodes with available pods: 0
Jan 23 13:17:10.639: INFO: Node iruya-node is running more than one daemon pod
Jan 23 13:17:11.658: INFO: Number of nodes with available pods: 0
Jan 23 13:17:11.659: INFO: Node iruya-node is running more than one daemon pod
Jan 23 13:17:12.652: INFO: Number of nodes with available pods: 0
Jan 23 13:17:12.652: INFO: Node iruya-node is running more than one daemon pod
Jan 23 13:17:13.657: INFO: Number of nodes with available pods: 0
Jan 23 13:17:13.657: INFO: Node iruya-node is running more than one daemon pod
Jan 23 13:17:14.652: INFO: Number of nodes with available pods: 0
Jan 23 13:17:14.652: INFO: Node iruya-node is running more than one daemon pod
Jan 23 13:17:16.897: INFO: Number of nodes with available pods: 0
Jan 23 13:17:16.897: INFO: Node iruya-node is running more than one daemon pod
Jan 23 13:17:17.654: INFO: Number of nodes with available pods: 0
Jan 23 13:17:17.654: INFO: Node iruya-node is running more than one daemon pod
Jan 23 13:17:18.655: INFO: Number of nodes with available pods: 0
Jan 23 13:17:18.656: INFO: Node iruya-node is running more than one daemon pod
Jan 23 13:17:19.663: INFO: Number of nodes with available pods: 1
Jan 23 13:17:19.663: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 13:17:20.668: INFO: Number of nodes with available pods: 2
Jan 23 13:17:20.668: INFO: Number of running nodes: 2, number of available pods: 2
Jan 23 13:17:20.668: INFO: Update the DaemonSet to trigger a rollout
Jan 23 13:17:20.733: INFO: Updating DaemonSet daemon-set
Jan 23 13:17:36.819: INFO: Roll back the DaemonSet before rollout is complete
Jan 23 13:17:36.830: INFO: Updating DaemonSet daemon-set
Jan 23 13:17:36.830: INFO: Make sure DaemonSet rollback is complete
Jan 23 13:17:36.836: INFO: Wrong image for pod: daemon-set-pz4r9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 23 13:17:36.836: INFO: Pod daemon-set-pz4r9 is not available
Jan 23 13:17:37.878: INFO: Wrong image for pod: daemon-set-pz4r9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 23 13:17:37.878: INFO: Pod daemon-set-pz4r9 is not available
Jan 23 13:17:38.871: INFO: Wrong image for pod: daemon-set-pz4r9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 23 13:17:38.872: INFO: Pod daemon-set-pz4r9 is not available
Jan 23 13:17:39.865: INFO: Wrong image for pod: daemon-set-pz4r9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 23 13:17:39.865: INFO: Pod daemon-set-pz4r9 is not available
Jan 23 13:17:40.880: INFO: Wrong image for pod: daemon-set-pz4r9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 23 13:17:40.881: INFO: Pod daemon-set-pz4r9 is not available
Jan 23 13:17:41.871: INFO: Wrong image for pod: daemon-set-pz4r9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 23 13:17:41.871: INFO: Pod daemon-set-pz4r9 is not available
Jan 23 13:17:42.880: INFO: Pod daemon-set-dgtmn is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-476, will wait for the garbage collector to delete the pods
Jan 23 13:17:42.966: INFO: Deleting DaemonSet.extensions daemon-set took: 19.058372ms
Jan 23 13:17:43.267: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.57392ms
Jan 23 13:17:57.884: INFO: Number of nodes with available pods: 0
Jan 23 13:17:57.884: INFO: Number of running nodes: 0, number of available pods: 0
Jan 23 13:17:57.901: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-476/daemonsets","resourceVersion":"21558395"},"items":null}
Jan 23 13:17:57.906: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-476/pods","resourceVersion":"21558395"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:17:57.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-476" for this suite.
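
Note: the rollback above is issued mid-rollout, after the DaemonSet was updated to the unpullable image foo:non-existent; only the one already-broken pod gets replaced, which is the "without unnecessary restarts" assertion. A command-line sketch of the same sequence of operations (not what the test literally ran; it drives the API directly, and the container name here is assumed):

kubectl -n daemonsets-476 set image daemonset/daemon-set app=foo:non-existent   # trigger the bad rollout
kubectl -n daemonsets-476 rollout undo daemonset/daemon-set                     # roll back before it completes
kubectl -n daemonsets-476 rollout status daemonset/daemon-set
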
Jan 23 13:18:05.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:18:06.067: INFO: namespace daemonsets-476 deletion completed in 8.143950247s

• [SLOW TEST:55.587 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:18:06.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 23 13:18:06.114: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:18:23.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2039" for this suite.
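
Note: on a RestartAlways pod the init containers still run sequentially to completion before any app container starts, which is what this test verifies. A minimal sketch of such a spec (all names and images are assumptions; the log only records the PodSpec line above):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-order-demo   # hypothetical
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "true"]   # runs first
  - name: init-2
    image: busybox
    command: ["sh", "-c", "true"]   # runs after init-1 succeeds
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
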
Jan 23 13:18:45.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:18:45.720: INFO: namespace init-container-2039 deletion completed in 22.176677756s

• [SLOW TEST:39.653 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:18:45.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 13:18:54.124: INFO: Waiting up to 5m0s for pod "client-envvars-1a0cbdc7-9fdf-4f92-b78a-fb76933fb4ee" in namespace "pods-8427" to be "success or failure"
Jan 23 13:18:54.633: INFO: Pod "client-envvars-1a0cbdc7-9fdf-4f92-b78a-fb76933fb4ee": Phase="Pending", Reason="", readiness=false. Elapsed: 508.421845ms
Jan 23 13:18:56.645: INFO: Pod "client-envvars-1a0cbdc7-9fdf-4f92-b78a-fb76933fb4ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.520129872s
Jan 23 13:18:58.669: INFO: Pod "client-envvars-1a0cbdc7-9fdf-4f92-b78a-fb76933fb4ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.544625855s
Jan 23 13:19:00.690: INFO: Pod "client-envvars-1a0cbdc7-9fdf-4f92-b78a-fb76933fb4ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.565723532s
Jan 23 13:19:02.699: INFO: Pod "client-envvars-1a0cbdc7-9fdf-4f92-b78a-fb76933fb4ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.574360777s
STEP: Saw pod success
Jan 23 13:19:02.699: INFO: Pod "client-envvars-1a0cbdc7-9fdf-4f92-b78a-fb76933fb4ee" satisfied condition "success or failure"
Jan 23 13:19:02.702: INFO: Trying to get logs from node iruya-node pod client-envvars-1a0cbdc7-9fdf-4f92-b78a-fb76933fb4ee container env3cont:
STEP: delete the pod
Jan 23 13:19:02.745: INFO: Waiting for pod client-envvars-1a0cbdc7-9fdf-4f92-b78a-fb76933fb4ee to disappear
Jan 23 13:19:02.815: INFO: Pod client-envvars-1a0cbdc7-9fdf-4f92-b78a-fb76933fb4ee no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:19:02.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8427" for this suite.
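
Note: the client pod passes because the kubelet injects <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT variables for every service that already existed when the pod started, which is why the test creates its service (and a backing server pod) before the env-dumping client. A rough illustration under assumed names and values (the log shows neither the service name nor the injected values):

kubectl exec client-pod -- env | grep FOOSERVICE   # "client-pod" and "fooservice" are hypothetical
# FOOSERVICE_SERVICE_HOST=10.0.0.10   # example values, not from the log
# FOOSERVICE_SERVICE_PORT=8765
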
Jan 23 13:19:44.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:19:45.104: INFO: namespace pods-8427 deletion completed in 42.255561295s

• [SLOW TEST:59.383 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:19:45.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 23 13:19:45.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 23 13:19:45.343: INFO: stderr: ""
Jan 23 13:19:45.344: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:19:45.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1468" for this suite.
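
Note: the assertion is simply that the bare core group version "v1" appears in that newline-separated list. Checking the same thing by hand (a sketch; grep -x matches the whole line):

kubectl api-versions | grep -x v1
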
Jan 23 13:19:51.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:19:51.563: INFO: namespace kubectl-1468 deletion completed in 6.174858566s

• [SLOW TEST:6.459 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:19:51.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 13:19:51.699: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 23.486314ms)
Jan 23 13:19:51.707: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.854978ms)
Jan 23 13:19:51.711: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.102073ms)
Jan 23 13:19:51.719: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.01714ms)
Jan 23 13:19:51.723: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.565608ms)
Jan 23 13:19:51.728: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.518523ms)
Jan 23 13:19:51.732: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.31911ms)
Jan 23 13:19:51.737: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.69847ms)
Jan 23 13:19:51.772: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 35.23154ms)
Jan 23 13:19:51.780: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.970499ms)
Jan 23 13:19:51.791: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.492697ms)
Jan 23 13:19:51.800: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.747186ms)
Jan 23 13:19:51.806: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.130297ms)
Jan 23 13:19:51.812: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.812858ms)
Jan 23 13:19:51.819: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.744505ms)
Jan 23 13:19:51.828: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.533444ms)
Jan 23 13:19:51.835: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.051513ms)
Jan 23 13:19:51.841: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.95392ms)
Jan 23 13:19:51.849: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.654048ms)
Jan 23 13:19:51.856: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.10952ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:19:51.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8562" for this suite.
Jan 23 13:19:57.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:19:58.046: INFO: namespace proxy-8562 deletion completed in 6.181796101s

• [SLOW TEST:6.483 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
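Note: each timed request above goes through the apiserver's node proxy subresource to the kubelet's log endpoint on port 10250, which is why the response body is the node's log directory listing (truncated by the framework before printing). A rough manual equivalent, assuming access to the same kubeconfig as this run, would be:

    # Fetch the kubelet log listing for iruya-node via the apiserver proxy
    # subresource (the same URL the test hits twenty times above).
    kubectl --kubeconfig=/root/.kube/config get --raw \
      "/api/v1/nodes/iruya-node:10250/proxy/logs/"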
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:19:58.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan 23 13:19:58.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-273'
Jan 23 13:19:58.461: INFO: stderr: ""
Jan 23 13:19:58.461: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 23 13:19:58.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-273'
Jan 23 13:19:58.664: INFO: stderr: ""
Jan 23 13:19:58.665: INFO: stdout: "update-demo-nautilus-74fjd update-demo-nautilus-qvjpb "
Jan 23 13:19:58.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74fjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-273'
Jan 23 13:19:58.793: INFO: stderr: ""
Jan 23 13:19:58.794: INFO: stdout: ""
Jan 23 13:19:58.794: INFO: update-demo-nautilus-74fjd is created but not running
Jan 23 13:20:03.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-273'
Jan 23 13:20:03.918: INFO: stderr: ""
Jan 23 13:20:03.918: INFO: stdout: "update-demo-nautilus-74fjd update-demo-nautilus-qvjpb "
Jan 23 13:20:03.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74fjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-273'
Jan 23 13:20:04.636: INFO: stderr: ""
Jan 23 13:20:04.636: INFO: stdout: ""
Jan 23 13:20:04.637: INFO: update-demo-nautilus-74fjd is created but not running
Jan 23 13:20:09.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-273'
Jan 23 13:20:09.780: INFO: stderr: ""
Jan 23 13:20:09.780: INFO: stdout: "update-demo-nautilus-74fjd update-demo-nautilus-qvjpb "
Jan 23 13:20:09.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74fjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-273'
Jan 23 13:20:09.875: INFO: stderr: ""
Jan 23 13:20:09.876: INFO: stdout: "true"
Jan 23 13:20:09.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-74fjd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-273'
Jan 23 13:20:09.972: INFO: stderr: ""
Jan 23 13:20:09.972: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 13:20:09.972: INFO: validating pod update-demo-nautilus-74fjd
Jan 23 13:20:09.989: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 13:20:09.989: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 23 13:20:09.989: INFO: update-demo-nautilus-74fjd is verified up and running
Jan 23 13:20:09.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvjpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-273'
Jan 23 13:20:10.081: INFO: stderr: ""
Jan 23 13:20:10.081: INFO: stdout: "true"
Jan 23 13:20:10.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvjpb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-273'
Jan 23 13:20:10.161: INFO: stderr: ""
Jan 23 13:20:10.161: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 13:20:10.161: INFO: validating pod update-demo-nautilus-qvjpb
Jan 23 13:20:10.182: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 13:20:10.182: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 23 13:20:10.182: INFO: update-demo-nautilus-qvjpb is verified up and running
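Note: the verification above is plain kubectl driven by Go templates: one query prints "true" only when the update-demo container reports a running state, and a second reads back the container image. The same two checks in the shorter jsonpath form (a sketch; pod name taken from this run) would be:

    # Print the running-state struct of the update-demo container; empty if not running.
    kubectl --kubeconfig=/root/.kube/config get pod update-demo-nautilus-74fjd --namespace=kubectl-273 \
      -o jsonpath='{.status.containerStatuses[?(@.name=="update-demo")].state.running}'
    # Print the image the container was created from.
    kubectl --kubeconfig=/root/.kube/config get pod update-demo-nautilus-74fjd --namespace=kubectl-273 \
      -o jsonpath='{.spec.containers[?(@.name=="update-demo")].image}'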
STEP: rolling-update to new replication controller
Jan 23 13:20:10.184: INFO: scanned /root for discovery docs: 
Jan 23 13:20:10.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-273'
Jan 23 13:20:40.981: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 23 13:20:40.981: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
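Note: the stderr warning is accurate: "kubectl rolling-update" only works on replication controllers and is deprecated. As the stdout shows, it creates the new controller, steps the two replica counts in opposite directions under a surge/availability budget, then deletes the old controller and renames the new one. The modern equivalent uses a Deployment (a sketch, assuming a hypothetical Deployment named update-demo rather than this test's RC):

    # Swap the image and let the Deployment controller perform the rollout.
    kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
    # Block until the rollout completes (or report why it is stuck).
    kubectl rollout status deployment/update-demo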
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 23 13:20:40.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-273'
Jan 23 13:20:41.104: INFO: stderr: ""
Jan 23 13:20:41.104: INFO: stdout: "update-demo-kitten-4twnh update-demo-kitten-b6zgr "
Jan 23 13:20:41.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4twnh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-273'
Jan 23 13:20:41.189: INFO: stderr: ""
Jan 23 13:20:41.189: INFO: stdout: "true"
Jan 23 13:20:41.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4twnh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-273'
Jan 23 13:20:41.275: INFO: stderr: ""
Jan 23 13:20:41.275: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 23 13:20:41.275: INFO: validating pod update-demo-kitten-4twnh
Jan 23 13:20:41.306: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 23 13:20:41.306: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 23 13:20:41.306: INFO: update-demo-kitten-4twnh is verified up and running
Jan 23 13:20:41.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b6zgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-273'
Jan 23 13:20:41.409: INFO: stderr: ""
Jan 23 13:20:41.409: INFO: stdout: "true"
Jan 23 13:20:41.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b6zgr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-273'
Jan 23 13:20:41.535: INFO: stderr: ""
Jan 23 13:20:41.535: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 23 13:20:41.535: INFO: validating pod update-demo-kitten-b6zgr
Jan 23 13:20:41.562: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 23 13:20:41.562: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 23 13:20:41.562: INFO: update-demo-kitten-b6zgr is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:20:41.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-273" for this suite.
Jan 23 13:21:05.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:21:05.725: INFO: namespace kubectl-273 deletion completed in 24.156674755s

• [SLOW TEST:67.679 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:21:05.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-b612564d-781f-4a02-bb95-7ffa8d0e9822
STEP: Creating a pod to test consume secrets
Jan 23 13:21:05.882: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b33e651c-1d86-4dd8-85fa-769311923c08" in namespace "projected-9318" to be "success or failure"
Jan 23 13:21:05.896: INFO: Pod "pod-projected-secrets-b33e651c-1d86-4dd8-85fa-769311923c08": Phase="Pending", Reason="", readiness=false. Elapsed: 14.459717ms
Jan 23 13:21:07.916: INFO: Pod "pod-projected-secrets-b33e651c-1d86-4dd8-85fa-769311923c08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034483445s
Jan 23 13:21:09.928: INFO: Pod "pod-projected-secrets-b33e651c-1d86-4dd8-85fa-769311923c08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045656066s
Jan 23 13:21:11.973: INFO: Pod "pod-projected-secrets-b33e651c-1d86-4dd8-85fa-769311923c08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090996331s
Jan 23 13:21:13.982: INFO: Pod "pod-projected-secrets-b33e651c-1d86-4dd8-85fa-769311923c08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09988412s
STEP: Saw pod success
Jan 23 13:21:13.982: INFO: Pod "pod-projected-secrets-b33e651c-1d86-4dd8-85fa-769311923c08" satisfied condition "success or failure"
Jan 23 13:21:13.985: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-b33e651c-1d86-4dd8-85fa-769311923c08 container projected-secret-volume-test: 
STEP: delete the pod
Jan 23 13:21:14.107: INFO: Waiting for pod pod-projected-secrets-b33e651c-1d86-4dd8-85fa-769311923c08 to disappear
Jan 23 13:21:14.116: INFO: Pod pod-projected-secrets-b33e651c-1d86-4dd8-85fa-769311923c08 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:21:14.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9318" for this suite.
Jan 23 13:21:20.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:21:20.291: INFO: namespace projected-9318 deletion completed in 6.169934092s

• [SLOW TEST:14.566 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
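Note: "mappings and Item Mode" means the projected secret volume remaps each secret key to a new file path and sets an explicit per-file mode; the test passes because the container read the mapped file back with the expected content and permissions before exiting. A minimal manifest of the same shape (a sketch with hypothetical names, not the exact spec the framework generates) would be:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo      # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/projected
      volumes:
      - name: secret-vol
        projected:
          sources:
          - secret:
              name: my-secret          # hypothetical secret
              items:
              - key: data-1            # remapped key
                path: new-path
                mode: 0400             # the per-item mode under test
    EOF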
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:21:20.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2531.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2531.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2531.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2531.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2531.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2531.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
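Note: both probe containers run the same loop: getent resolves the cluster hostnames (exercising /etc/hosts plus the DNS search path), dig verifies the pod's generated A record over UDP and then TCP, and every success writes an OK file that the test reads back through the pod. The "Unable to read" errors below appear to be the framework polling for those result files before they have been written; the probes are declared successful once all of them are found. Stripped of the loop and the result files, the core checks amount to:

    # Resolve the DNS querier through /etc/hosts and the cluster search path.
    getent hosts dns-querier-1.dns-test-service.dns-2531.svc.cluster.local
    getent hosts dns-querier-1
    # Build the pod's A record name from its IP, then look it up over UDP and TCP.
    podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-2531.pod.cluster.local"}')
    dig +notcp +noall +answer +search "$podARec" A
    dig +tcp +noall +answer +search "$podARec" A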

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 13:21:32.619: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2531/dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4: the server could not find the requested resource (get pods dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4)
Jan 23 13:21:32.636: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2531/dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4: the server could not find the requested resource (get pods dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4)
Jan 23 13:21:32.647: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2531.svc.cluster.local from pod dns-2531/dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4: the server could not find the requested resource (get pods dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4)
Jan 23 13:21:32.653: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2531/dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4: the server could not find the requested resource (get pods dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4)
Jan 23 13:21:32.657: INFO: Unable to read jessie_udp@PodARecord from pod dns-2531/dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4: the server could not find the requested resource (get pods dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4)
Jan 23 13:21:32.661: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2531/dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4: the server could not find the requested resource (get pods dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4)
Jan 23 13:21:32.661: INFO: Lookups using dns-2531/dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2531.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 23 13:21:37.735: INFO: DNS probes using dns-2531/dns-test-7e2d70dd-d430-4dc5-b1a8-8879876c47e4 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:21:37.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2531" for this suite.
Jan 23 13:21:43.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:21:44.037: INFO: namespace dns-2531 deletion completed in 6.171041724s

• [SLOW TEST:23.745 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:21:44.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7095
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-7095
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7095
Jan 23 13:21:44.155: INFO: Found 0 stateful pods, waiting for 1
Jan 23 13:21:54.164: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 23 13:21:54.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 13:21:54.841: INFO: stderr: "I0123 13:21:54.393665    1002 log.go:172] (0xc00069cc60) (0xc0001f6be0) Create stream\nI0123 13:21:54.393837    1002 log.go:172] (0xc00069cc60) (0xc0001f6be0) Stream added, broadcasting: 1\nI0123 13:21:54.401799    1002 log.go:172] (0xc00069cc60) Reply frame received for 1\nI0123 13:21:54.401868    1002 log.go:172] (0xc00069cc60) (0xc0001f6c80) Create stream\nI0123 13:21:54.401882    1002 log.go:172] (0xc00069cc60) (0xc0001f6c80) Stream added, broadcasting: 3\nI0123 13:21:54.403552    1002 log.go:172] (0xc00069cc60) Reply frame received for 3\nI0123 13:21:54.403588    1002 log.go:172] (0xc00069cc60) (0xc000730000) Create stream\nI0123 13:21:54.403601    1002 log.go:172] (0xc00069cc60) (0xc000730000) Stream added, broadcasting: 5\nI0123 13:21:54.404904    1002 log.go:172] (0xc00069cc60) Reply frame received for 5\nI0123 13:21:54.609370    1002 log.go:172] (0xc00069cc60) Data frame received for 5\nI0123 13:21:54.609427    1002 log.go:172] (0xc000730000) (5) Data frame handling\nI0123 13:21:54.609461    1002 log.go:172] (0xc000730000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0123 13:21:54.664291    1002 log.go:172] (0xc00069cc60) Data frame received for 3\nI0123 13:21:54.664316    1002 log.go:172] (0xc0001f6c80) (3) Data frame handling\nI0123 13:21:54.664331    1002 log.go:172] (0xc0001f6c80) (3) Data frame sent\nI0123 13:21:54.820891    1002 log.go:172] (0xc00069cc60) (0xc0001f6c80) Stream removed, broadcasting: 3\nI0123 13:21:54.821445    1002 log.go:172] (0xc00069cc60) Data frame received for 1\nI0123 13:21:54.821548    1002 log.go:172] (0xc00069cc60) (0xc000730000) Stream removed, broadcasting: 5\nI0123 13:21:54.821630    1002 log.go:172] (0xc0001f6be0) (1) Data frame handling\nI0123 13:21:54.821688    1002 log.go:172] (0xc0001f6be0) (1) Data frame sent\nI0123 13:21:54.821722    1002 log.go:172] (0xc00069cc60) (0xc0001f6be0) Stream removed, broadcasting: 1\nI0123 13:21:54.821799    1002 log.go:172] (0xc00069cc60) Go away received\nI0123 13:21:54.822758    1002 log.go:172] (0xc00069cc60) (0xc0001f6be0) Stream removed, broadcasting: 1\nI0123 13:21:54.822786    1002 log.go:172] (0xc00069cc60) (0xc0001f6c80) Stream removed, broadcasting: 3\nI0123 13:21:54.822802    1002 log.go:172] (0xc00069cc60) (0xc000730000) Stream removed, broadcasting: 5\n"
Jan 23 13:21:54.842: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 13:21:54.842: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
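Note: moving index.html out of the nginx web root is how the test manufactures an unhealthy stateful pod: the container keeps running but its HTTP readiness probe starts failing, so ss-0 flips to Ready=false without being restarted; moving the file back (the scale-up step below) restores readiness. A quick way to watch the Ready condition flip (a sketch against this run's namespace) is:

    # Prints "False" while index.html is parked in /tmp, "True" once it is restored.
    kubectl --kubeconfig=/root/.kube/config get pod ss-0 --namespace=statefulset-7095 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'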

Jan 23 13:21:54.857: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 23 13:21:54.857: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 13:21:54.937: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 23 13:21:54.938: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  }]
Jan 23 13:21:54.938: INFO: ss-1              Pending         []
Jan 23 13:21:54.938: INFO: 
Jan 23 13:21:54.938: INFO: StatefulSet ss has not reached scale 3, at 2
Jan 23 13:21:57.072: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.956369401s
Jan 23 13:21:58.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.821623055s
Jan 23 13:21:59.718: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.182848547s
Jan 23 13:22:00.768: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.175983482s
Jan 23 13:22:01.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.126431759s
Jan 23 13:22:03.234: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.943143399s
Jan 23 13:22:04.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 659.61265ms
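Note: under the default OrderedReady pod management, a StatefulSet would refuse to create ss-1 and ss-2 while ss-0 is unready; this burst test presumably runs the set with podManagementPolicy: Parallel, which creates and deletes pods without waiting on readiness, so the countdown above only needs to verify that the set never overshoots 3 replicas. The policy can be checked directly (a sketch):

    # Field documentation from the API schema.
    kubectl explain statefulset.spec.podManagementPolicy
    # What this set actually uses (expected: Parallel).
    kubectl --kubeconfig=/root/.kube/config get statefulset ss --namespace=statefulset-7095 \
      -o jsonpath='{.spec.podManagementPolicy}'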
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7095
Jan 23 13:22:05.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 13:22:05.815: INFO: stderr: "I0123 13:22:05.479074    1022 log.go:172] (0xc000118dc0) (0xc00065c820) Create stream\nI0123 13:22:05.479224    1022 log.go:172] (0xc000118dc0) (0xc00065c820) Stream added, broadcasting: 1\nI0123 13:22:05.485512    1022 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0123 13:22:05.485543    1022 log.go:172] (0xc000118dc0) (0xc0008a0000) Create stream\nI0123 13:22:05.485552    1022 log.go:172] (0xc000118dc0) (0xc0008a0000) Stream added, broadcasting: 3\nI0123 13:22:05.487898    1022 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0123 13:22:05.487935    1022 log.go:172] (0xc000118dc0) (0xc0008a00a0) Create stream\nI0123 13:22:05.487945    1022 log.go:172] (0xc000118dc0) (0xc0008a00a0) Stream added, broadcasting: 5\nI0123 13:22:05.490406    1022 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0123 13:22:05.581401    1022 log.go:172] (0xc000118dc0) Data frame received for 5\nI0123 13:22:05.581450    1022 log.go:172] (0xc0008a00a0) (5) Data frame handling\nI0123 13:22:05.581471    1022 log.go:172] (0xc0008a00a0) (5) Data frame sent\n+ mv -vI0123 13:22:05.582152    1022 log.go:172] (0xc000118dc0) Data frame received for 5\nI0123 13:22:05.582225    1022 log.go:172] (0xc000118dc0) Data frame received for 3\nI0123 13:22:05.582250    1022 log.go:172] (0xc0008a0000) (3) Data frame handling\nI0123 13:22:05.582265    1022 log.go:172] (0xc0008a0000) (3) Data frame sent\nI0123 13:22:05.582300    1022 log.go:172] (0xc0008a00a0) (5) Data frame handling\nI0123 13:22:05.582309    1022 log.go:172] (0xc0008a00a0) (5) Data frame sent\n /tmp/index.html /usr/share/nginx/html/\nI0123 13:22:05.793098    1022 log.go:172] (0xc000118dc0) Data frame received for 1\nI0123 13:22:05.793217    1022 log.go:172] (0xc00065c820) (1) Data frame handling\nI0123 13:22:05.793245    1022 log.go:172] (0xc00065c820) (1) Data frame sent\nI0123 13:22:05.793996    1022 log.go:172] (0xc000118dc0) (0xc00065c820) Stream removed, broadcasting: 1\nI0123 13:22:05.794611    1022 log.go:172] (0xc000118dc0) (0xc0008a00a0) Stream removed, broadcasting: 5\nI0123 13:22:05.794697    1022 log.go:172] (0xc000118dc0) (0xc0008a0000) Stream removed, broadcasting: 3\nI0123 13:22:05.794757    1022 log.go:172] (0xc000118dc0) Go away received\nI0123 13:22:05.794869    1022 log.go:172] (0xc000118dc0) (0xc00065c820) Stream removed, broadcasting: 1\nI0123 13:22:05.794913    1022 log.go:172] (0xc000118dc0) (0xc0008a0000) Stream removed, broadcasting: 3\nI0123 13:22:05.794945    1022 log.go:172] (0xc000118dc0) (0xc0008a00a0) Stream removed, broadcasting: 5\n"
Jan 23 13:22:05.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 13:22:05.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 23 13:22:05.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 13:22:06.379: INFO: stderr: "I0123 13:22:06.052177    1043 log.go:172] (0xc00099c0b0) (0xc0009565a0) Create stream\nI0123 13:22:06.052335    1043 log.go:172] (0xc00099c0b0) (0xc0009565a0) Stream added, broadcasting: 1\nI0123 13:22:06.065938    1043 log.go:172] (0xc00099c0b0) Reply frame received for 1\nI0123 13:22:06.065990    1043 log.go:172] (0xc00099c0b0) (0xc000952000) Create stream\nI0123 13:22:06.066000    1043 log.go:172] (0xc00099c0b0) (0xc000952000) Stream added, broadcasting: 3\nI0123 13:22:06.067636    1043 log.go:172] (0xc00099c0b0) Reply frame received for 3\nI0123 13:22:06.067658    1043 log.go:172] (0xc00099c0b0) (0xc0009520a0) Create stream\nI0123 13:22:06.067667    1043 log.go:172] (0xc00099c0b0) (0xc0009520a0) Stream added, broadcasting: 5\nI0123 13:22:06.068604    1043 log.go:172] (0xc00099c0b0) Reply frame received for 5\nI0123 13:22:06.190145    1043 log.go:172] (0xc00099c0b0) Data frame received for 5\nI0123 13:22:06.190175    1043 log.go:172] (0xc0009520a0) (5) Data frame handling\nI0123 13:22:06.190185    1043 log.go:172] (0xc0009520a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0123 13:22:06.245146    1043 log.go:172] (0xc00099c0b0) Data frame received for 5\nI0123 13:22:06.245192    1043 log.go:172] (0xc0009520a0) (5) Data frame handling\nI0123 13:22:06.245221    1043 log.go:172] (0xc0009520a0) (5) Data frame sent\nI0123 13:22:06.245242    1043 log.go:172] (0xc00099c0b0) Data frame received for 3\nI0123 13:22:06.245262    1043 log.go:172] (0xc000952000) (3) Data frame handling\nI0123 13:22:06.245336    1043 log.go:172] (0xc000952000) (3) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0123 13:22:06.371118    1043 log.go:172] (0xc00099c0b0) Data frame received for 1\nI0123 13:22:06.371226    1043 log.go:172] (0xc0009565a0) (1) Data frame handling\nI0123 13:22:06.371264    1043 log.go:172] (0xc0009565a0) (1) Data frame sent\nI0123 13:22:06.371284    1043 log.go:172] (0xc00099c0b0) (0xc000952000) Stream removed, broadcasting: 3\nI0123 13:22:06.371345    1043 log.go:172] (0xc00099c0b0) (0xc0009520a0) Stream removed, broadcasting: 5\nI0123 13:22:06.371397    1043 log.go:172] (0xc00099c0b0) (0xc0009565a0) Stream removed, broadcasting: 1\nI0123 13:22:06.371434    1043 log.go:172] (0xc00099c0b0) Go away received\nI0123 13:22:06.372126    1043 log.go:172] (0xc00099c0b0) (0xc0009565a0) Stream removed, broadcasting: 1\nI0123 13:22:06.372216    1043 log.go:172] (0xc00099c0b0) (0xc000952000) Stream removed, broadcasting: 3\nI0123 13:22:06.372227    1043 log.go:172] (0xc00099c0b0) (0xc0009520a0) Stream removed, broadcasting: 5\n"
Jan 23 13:22:06.379: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 13:22:06.379: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 23 13:22:06.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 13:22:06.904: INFO: stderr: "I0123 13:22:06.632567    1061 log.go:172] (0xc000a044d0) (0xc0008188c0) Create stream\nI0123 13:22:06.632781    1061 log.go:172] (0xc000a044d0) (0xc0008188c0) Stream added, broadcasting: 1\nI0123 13:22:06.641293    1061 log.go:172] (0xc000a044d0) Reply frame received for 1\nI0123 13:22:06.641352    1061 log.go:172] (0xc000a044d0) (0xc000682280) Create stream\nI0123 13:22:06.641369    1061 log.go:172] (0xc000a044d0) (0xc000682280) Stream added, broadcasting: 3\nI0123 13:22:06.643144    1061 log.go:172] (0xc000a044d0) Reply frame received for 3\nI0123 13:22:06.643174    1061 log.go:172] (0xc000a044d0) (0xc000818960) Create stream\nI0123 13:22:06.643186    1061 log.go:172] (0xc000a044d0) (0xc000818960) Stream added, broadcasting: 5\nI0123 13:22:06.644459    1061 log.go:172] (0xc000a044d0) Reply frame received for 5\nI0123 13:22:06.747859    1061 log.go:172] (0xc000a044d0) Data frame received for 5\nI0123 13:22:06.747899    1061 log.go:172] (0xc000818960) (5) Data frame handling\nI0123 13:22:06.747916    1061 log.go:172] (0xc000818960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0123 13:22:06.747931    1061 log.go:172] (0xc000a044d0) Data frame received for 3\nI0123 13:22:06.747938    1061 log.go:172] (0xc000682280) (3) Data frame handling\nI0123 13:22:06.747947    1061 log.go:172] (0xc000682280) (3) Data frame sent\nI0123 13:22:06.893390    1061 log.go:172] (0xc000a044d0) (0xc000682280) Stream removed, broadcasting: 3\nI0123 13:22:06.893749    1061 log.go:172] (0xc000a044d0) Data frame received for 1\nI0123 13:22:06.893853    1061 log.go:172] (0xc000a044d0) (0xc000818960) Stream removed, broadcasting: 5\nI0123 13:22:06.893910    1061 log.go:172] (0xc0008188c0) (1) Data frame handling\nI0123 13:22:06.893934    1061 log.go:172] (0xc0008188c0) (1) Data frame sent\nI0123 13:22:06.893963    1061 log.go:172] (0xc000a044d0) (0xc0008188c0) Stream removed, broadcasting: 1\nI0123 13:22:06.893991    1061 log.go:172] (0xc000a044d0) Go away received\nI0123 13:22:06.895329    1061 log.go:172] (0xc000a044d0) (0xc0008188c0) Stream removed, broadcasting: 1\nI0123 13:22:06.895354    1061 log.go:172] (0xc000a044d0) (0xc000682280) Stream removed, broadcasting: 3\nI0123 13:22:06.895368    1061 log.go:172] (0xc000a044d0) (0xc000818960) Stream removed, broadcasting: 5\n"
Jan 23 13:22:06.905: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 13:22:06.905: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 23 13:22:06.913: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 13:22:06.913: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 13:22:06.913: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 23 13:22:06.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 13:22:07.295: INFO: stderr: "I0123 13:22:07.062019    1081 log.go:172] (0xc00012adc0) (0xc0002ba820) Create stream\nI0123 13:22:07.062147    1081 log.go:172] (0xc00012adc0) (0xc0002ba820) Stream added, broadcasting: 1\nI0123 13:22:07.067027    1081 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0123 13:22:07.067057    1081 log.go:172] (0xc00012adc0) (0xc0008c4000) Create stream\nI0123 13:22:07.067066    1081 log.go:172] (0xc00012adc0) (0xc0008c4000) Stream added, broadcasting: 3\nI0123 13:22:07.068395    1081 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0123 13:22:07.068417    1081 log.go:172] (0xc00012adc0) (0xc0002ba8c0) Create stream\nI0123 13:22:07.068423    1081 log.go:172] (0xc00012adc0) (0xc0002ba8c0) Stream added, broadcasting: 5\nI0123 13:22:07.069477    1081 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0123 13:22:07.166679    1081 log.go:172] (0xc00012adc0) Data frame received for 3\nI0123 13:22:07.166711    1081 log.go:172] (0xc0008c4000) (3) Data frame handling\nI0123 13:22:07.166729    1081 log.go:172] (0xc0008c4000) (3) Data frame sent\nI0123 13:22:07.166763    1081 log.go:172] (0xc00012adc0) Data frame received for 5\nI0123 13:22:07.166774    1081 log.go:172] (0xc0002ba8c0) (5) Data frame handling\nI0123 13:22:07.166789    1081 log.go:172] (0xc0002ba8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0123 13:22:07.287065    1081 log.go:172] (0xc00012adc0) (0xc0008c4000) Stream removed, broadcasting: 3\nI0123 13:22:07.287296    1081 log.go:172] (0xc00012adc0) Data frame received for 1\nI0123 13:22:07.287329    1081 log.go:172] (0xc0002ba820) (1) Data frame handling\nI0123 13:22:07.287356    1081 log.go:172] (0xc0002ba820) (1) Data frame sent\nI0123 13:22:07.287467    1081 log.go:172] (0xc00012adc0) (0xc0002ba820) Stream removed, broadcasting: 1\nI0123 13:22:07.287997    1081 log.go:172] (0xc00012adc0) (0xc0002ba8c0) Stream removed, broadcasting: 5\nI0123 13:22:07.288095    1081 log.go:172] (0xc00012adc0) Go away received\nI0123 13:22:07.288705    1081 log.go:172] (0xc00012adc0) (0xc0002ba820) Stream removed, broadcasting: 1\nI0123 13:22:07.288725    1081 log.go:172] (0xc00012adc0) (0xc0008c4000) Stream removed, broadcasting: 3\nI0123 13:22:07.288735    1081 log.go:172] (0xc00012adc0) (0xc0002ba8c0) Stream removed, broadcasting: 5\n"
Jan 23 13:22:07.296: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 13:22:07.296: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 23 13:22:07.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 13:22:07.636: INFO: stderr: "I0123 13:22:07.448936    1100 log.go:172] (0xc000a2a420) (0xc00036e820) Create stream\nI0123 13:22:07.449038    1100 log.go:172] (0xc000a2a420) (0xc00036e820) Stream added, broadcasting: 1\nI0123 13:22:07.456491    1100 log.go:172] (0xc000a2a420) Reply frame received for 1\nI0123 13:22:07.456522    1100 log.go:172] (0xc000a2a420) (0xc00036e000) Create stream\nI0123 13:22:07.456531    1100 log.go:172] (0xc000a2a420) (0xc00036e000) Stream added, broadcasting: 3\nI0123 13:22:07.457646    1100 log.go:172] (0xc000a2a420) Reply frame received for 3\nI0123 13:22:07.457703    1100 log.go:172] (0xc000a2a420) (0xc000632320) Create stream\nI0123 13:22:07.457716    1100 log.go:172] (0xc000a2a420) (0xc000632320) Stream added, broadcasting: 5\nI0123 13:22:07.459067    1100 log.go:172] (0xc000a2a420) Reply frame received for 5\nI0123 13:22:07.529559    1100 log.go:172] (0xc000a2a420) Data frame received for 5\nI0123 13:22:07.529589    1100 log.go:172] (0xc000632320) (5) Data frame handling\nI0123 13:22:07.529600    1100 log.go:172] (0xc000632320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0123 13:22:07.561407    1100 log.go:172] (0xc000a2a420) Data frame received for 3\nI0123 13:22:07.561450    1100 log.go:172] (0xc00036e000) (3) Data frame handling\nI0123 13:22:07.561478    1100 log.go:172] (0xc00036e000) (3) Data frame sent\nI0123 13:22:07.627041    1100 log.go:172] (0xc000a2a420) (0xc00036e000) Stream removed, broadcasting: 3\nI0123 13:22:07.627143    1100 log.go:172] (0xc000a2a420) Data frame received for 1\nI0123 13:22:07.627182    1100 log.go:172] (0xc000a2a420) (0xc000632320) Stream removed, broadcasting: 5\nI0123 13:22:07.627252    1100 log.go:172] (0xc00036e820) (1) Data frame handling\nI0123 13:22:07.627285    1100 log.go:172] (0xc00036e820) (1) Data frame sent\nI0123 13:22:07.627309    1100 log.go:172] (0xc000a2a420) (0xc00036e820) Stream removed, broadcasting: 1\nI0123 13:22:07.627335    1100 log.go:172] (0xc000a2a420) Go away received\nI0123 13:22:07.627865    1100 log.go:172] (0xc000a2a420) (0xc00036e820) Stream removed, broadcasting: 1\nI0123 13:22:07.627894    1100 log.go:172] (0xc000a2a420) (0xc00036e000) Stream removed, broadcasting: 3\nI0123 13:22:07.627905    1100 log.go:172] (0xc000a2a420) (0xc000632320) Stream removed, broadcasting: 5\n"
Jan 23 13:22:07.636: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 13:22:07.636: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 23 13:22:07.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 13:22:08.475: INFO: stderr: "I0123 13:22:08.156830    1122 log.go:172] (0xc0009b0420) (0xc00010e780) Create stream\nI0123 13:22:08.157000    1122 log.go:172] (0xc0009b0420) (0xc00010e780) Stream added, broadcasting: 1\nI0123 13:22:08.163864    1122 log.go:172] (0xc0009b0420) Reply frame received for 1\nI0123 13:22:08.163940    1122 log.go:172] (0xc0009b0420) (0xc0008aa000) Create stream\nI0123 13:22:08.163974    1122 log.go:172] (0xc0009b0420) (0xc0008aa000) Stream added, broadcasting: 3\nI0123 13:22:08.165368    1122 log.go:172] (0xc0009b0420) Reply frame received for 3\nI0123 13:22:08.165401    1122 log.go:172] (0xc0009b0420) (0xc0006383c0) Create stream\nI0123 13:22:08.165411    1122 log.go:172] (0xc0009b0420) (0xc0006383c0) Stream added, broadcasting: 5\nI0123 13:22:08.166476    1122 log.go:172] (0xc0009b0420) Reply frame received for 5\nI0123 13:22:08.269495    1122 log.go:172] (0xc0009b0420) Data frame received for 5\nI0123 13:22:08.269611    1122 log.go:172] (0xc0006383c0) (5) Data frame handling\nI0123 13:22:08.269655    1122 log.go:172] (0xc0006383c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0123 13:22:08.294563    1122 log.go:172] (0xc0009b0420) Data frame received for 3\nI0123 13:22:08.294594    1122 log.go:172] (0xc0008aa000) (3) Data frame handling\nI0123 13:22:08.294623    1122 log.go:172] (0xc0008aa000) (3) Data frame sent\nI0123 13:22:08.453630    1122 log.go:172] (0xc0009b0420) (0xc0008aa000) Stream removed, broadcasting: 3\nI0123 13:22:08.453840    1122 log.go:172] (0xc0009b0420) Data frame received for 1\nI0123 13:22:08.453928    1122 log.go:172] (0xc0009b0420) (0xc0006383c0) Stream removed, broadcasting: 5\nI0123 13:22:08.454020    1122 log.go:172] (0xc00010e780) (1) Data frame handling\nI0123 13:22:08.454130    1122 log.go:172] (0xc00010e780) (1) Data frame sent\nI0123 13:22:08.454468    1122 log.go:172] (0xc0009b0420) (0xc00010e780) Stream removed, broadcasting: 1\nI0123 13:22:08.457268    1122 log.go:172] (0xc0009b0420) Go away received\nI0123 13:22:08.462373    1122 log.go:172] (0xc0009b0420) (0xc00010e780) Stream removed, broadcasting: 1\nI0123 13:22:08.462516    1122 log.go:172] (0xc0009b0420) (0xc0008aa000) Stream removed, broadcasting: 3\nI0123 13:22:08.462669    1122 log.go:172] (0xc0009b0420) (0xc0006383c0) Stream removed, broadcasting: 5\n"
Jan 23 13:22:08.475: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 13:22:08.475: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 23 13:22:08.475: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 13:22:08.487: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 23 13:22:18.507: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 23 13:22:18.507: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 23 13:22:18.507: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 23 13:22:18.586: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 23 13:22:18.586: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  }]
Jan 23 13:22:18.586: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:18.586: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:18.586: INFO: 
Jan 23 13:22:18.586: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 23 13:22:20.259: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 23 13:22:20.260: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  }]
Jan 23 13:22:20.260: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:20.260: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:20.260: INFO: 
Jan 23 13:22:20.260: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 23 13:22:21.277: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 23 13:22:21.277: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  }]
Jan 23 13:22:21.277: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:21.278: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:21.278: INFO: 
Jan 23 13:22:21.278: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 23 13:22:22.316: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 23 13:22:22.316: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  }]
Jan 23 13:22:22.316: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:22.317: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:22.317: INFO: 
Jan 23 13:22:22.317: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 23 13:22:23.448: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 23 13:22:23.448: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  }]
Jan 23 13:22:23.448: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:23.448: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:23.448: INFO: 
Jan 23 13:22:23.448: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 23 13:22:24.502: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 23 13:22:24.502: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  }]
Jan 23 13:22:24.502: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:24.502: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:24.503: INFO: 
Jan 23 13:22:24.503: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 23 13:22:25.522: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 23 13:22:25.522: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  }]
Jan 23 13:22:25.522: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:25.522: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:25.523: INFO: 
Jan 23 13:22:25.523: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 23 13:22:26.538: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 23 13:22:26.538: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:44 +0000 UTC  }]
Jan 23 13:22:26.538: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:26.538: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:26.538: INFO: 
Jan 23 13:22:26.538: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 23 13:22:27.548: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 23 13:22:27.548: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:27.548: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:27.548: INFO: 
Jan 23 13:22:27.548: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 23 13:22:28.558: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 23 13:22:28.558: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:21:54 +0000 UTC  }]
Jan 23 13:22:28.559: INFO: 
Jan 23 13:22:28.559: INFO: StatefulSet ss has not reached scale 0, at 1
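The loop above is the e2e framework polling once per second and logging pod conditions until the StatefulSet reports the target scale. A minimal sketch of the same wait using client-go (the context-taking method signatures follow current client-go, not the v1.15 vintage of this log; the kubeconfig path is the one the log itself uses):

    // Poll until StatefulSet "ss" reports zero replicas, mirroring the
    // one-second cadence visible in the log above. Sketch only.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollImmediate(time.Second, 10*time.Minute, func() (bool, error) {
            ss, err := cs.AppsV1().StatefulSets("statefulset-7095").Get(context.TODO(), "ss", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Printf("StatefulSet ss at scale %d\n", ss.Status.Replicas)
            return ss.Status.Replicas == 0, nil
        })
        if err != nil {
            panic(err)
        }
    }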
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7095
Jan 23 13:22:29.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 13:22:29.850: INFO: rc: 1
Jan 23 13:22:29.850: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00327b6e0 exit status 1   true [0xc000011238 0xc000011360 0xc000011398] [0xc000011238 0xc000011360 0xc000011398] [0xc0000112e8 0xc000011390] [0xba6c50 0xba6c50] 0xc002675a40 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
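Each failed attempt above is followed by a fixed 10-second back-off before the command is re-run. A sketch of that retry shape, shelling out to kubectl exactly as the log shows (the command line is copied from the log; the 5-minute overall budget is inferred from the timestamps that follow):

    package sketch

    import (
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // retryHostCmd re-runs a kubectl exec every 10s until it succeeds or the
    // budget elapses, mirroring the RunHostCmd retry loop in the log.
    func retryHostCmd(kubeconfig, ns, pod, shellCmd string) error {
        return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
            err := exec.Command("kubectl",
                "--kubeconfig", kubeconfig,
                "exec", "--namespace="+ns, pod,
                "--", "/bin/sh", "-x", "-c", shellCmd,
            ).Run()
            if err != nil {
                return false, nil // non-zero rc: retry after the 10s interval
            }
            return true, nil
        })
    }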
Jan 23 13:22:39.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 13:22:39.984: INFO: rc: 1
Jan 23 13:22:39.985: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0022a6090 exit status 1   true [0xc0000ebc90 0xc0000ebf30 0xc002176010] [0xc0000ebc90 0xc0000ebf30 0xc002176010] [0xc0000ebe10 0xc002176008] [0xba6c50 0xba6c50] 0xc002b2a240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
[... 28 further identical retries every 10s between 13:22:49 and 13:27:26, each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-2" not found ...]
Jan 23 13:27:36.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7095 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 13:27:36.360: INFO: rc: 1
Jan 23 13:27:36.360: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan 23 13:27:36.360: INFO: Scaling statefulset ss to 0
Jan 23 13:27:36.374: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 23 13:27:36.376: INFO: Deleting all statefulset in ns statefulset-7095
Jan 23 13:27:36.379: INFO: Scaling statefulset ss to 0
Jan 23 13:27:36.387: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 13:27:36.389: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:27:36.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7095" for this suite.
Jan 23 13:27:44.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:27:44.575: INFO: namespace statefulset-7095 deletion completed in 8.162568019s

• [SLOW TEST:360.537 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:27:44.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 23 13:28:00.784: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 13:28:00.803: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 13:28:02.803: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 13:28:02.814: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 13:28:04.804: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 13:28:04.813: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 13:28:06.804: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 13:28:06.814: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 13:28:08.803: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 13:28:09.114: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 13:28:10.804: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 13:28:10.812: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 13:28:12.804: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 13:28:12.815: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 13:28:14.804: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 13:28:14.815: INFO: Pod pod-with-prestop-http-hook still exists
Jan 23 13:28:16.804: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 23 13:28:16.818: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:28:16.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1224" for this suite.
Jan 23 13:28:38.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:28:39.087: INFO: namespace container-lifecycle-hook-1224 deletion completed in 22.209063675s

• [SLOW TEST:54.510 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
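For reference, the pod under test carries an HTTP preStop hook; deleting it fires the hook, and the test then confirms the handler pod received the request. A sketch of such a pod spec with current k8s.io/api types (in the v1.15 vintage of this log the handler type was v1.Handler rather than v1.LifecycleHandler; the target host, port, and path are illustrative assumptions, not read from this log):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // podWithPreStopHTTPHook builds a pod whose container fires an HTTP GET
    // at a handler pod when it is deleted. Host/port/path are placeholders.
    func podWithPreStopHTTPHook(handlerIP string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-prestop-http-hook",
                    Image: "k8s.gcr.io/pause:3.1", // pause image, per the era of this log
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.LifecycleHandler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Host: handlerIP,
                                Path: "/echo?msg=prestop", // assumed handler endpoint
                                Port: intstr.FromInt(8080),
                            },
                        },
                    },
                }},
            },
        }
    }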
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:28:39.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 23 13:28:39.149: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 23 13:28:39.911: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 23 13:28:42.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715382919, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715382919, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715382919, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715382919, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
[... same deployment status (ReadyReplicas:0, Reason:"MinimumReplicasUnavailable") logged again at 13:28:44, 13:28:46, 13:28:48, and 13:28:50 while waiting for minimum availability ...]
Jan 23 13:28:56.258: INFO: Waited 4.075019056s for the sample-apiserver to be ready to handle requests.
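Registering a sample API server comes down to deploying the extension server plus creating an APIService object that tells the aggregator where to proxy the group/version. A sketch of the APIService piece using the kube-aggregator client (the group name, service name, and CA bundle handling are assumptions; the real test wires in generated certificates):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/rest"
        apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
        aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
    )

    // registerSampleAPIService creates the APIService record that points the
    // aggregation layer at the extension server's Service. Names are hypothetical.
    func registerSampleAPIService(cfg *rest.Config, caBundle []byte) error {
        client, err := aggregator.NewForConfig(cfg)
        if err != nil {
            return err
        }
        _, err = client.ApiregistrationV1().APIServices().Create(context.TODO(),
            &apiregv1.APIService{
                ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"}, // hypothetical group
                Spec: apiregv1.APIServiceSpec{
                    Service: &apiregv1.ServiceReference{
                        Namespace: "aggregator-6219", // namespace from the log
                        Name:      "sample-api",      // assumed service name
                    },
                    Group:                "wardle.example.com",
                    Version:              "v1alpha1",
                    CABundle:             caBundle,
                    GroupPriorityMinimum: 2000,
                    VersionPriority:      200,
                },
            }, metav1.CreateOptions{})
        return err
    }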
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:28:56.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6219" for this suite.
Jan 23 13:29:02.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:29:02.994: INFO: namespace aggregator-6219 deletion completed in 6.2234237s

• [SLOW TEST:23.907 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:29:02.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 23 13:29:03.067: INFO: PodSpec: initContainers in spec.initContainers
Jan 23 13:30:01.331: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-390b034c-76d3-471e-a398-e2feba264c51", GenerateName:"", Namespace:"init-container-5670", SelfLink:"/api/v1/namespaces/init-container-5670/pods/pod-init-390b034c-76d3-471e-a398-e2feba264c51", UID:"0efabd7e-1ad7-47ab-a6d4-f3454b10be51", ResourceVersion:"21559989", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715382943, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"67732585"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bxr6w", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002bd2600), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bxr6w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bxr6w", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bxr6w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0025e57c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002a7d0e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0025e5850)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0025e5870)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0025e5878), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0025e587c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715382943, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715382943, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715382943, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715382943, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002c17ec0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00217d570)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00217d5e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://325edb63d9cc2b1747f54e3112649595d82f31e3dcdbb64a4471ae0dee4b8e23"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c17f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c17ee0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
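The dump above shows the shape the test expects: init1 (/bin/false) keeps failing and restarting, init2 never starts, and the app container run1 stays Waiting. A sketch of the pod spec the test builds, reconstructed from the fields in the dump (resource requests are omitted for brevity):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // failingInitPod reproduces the spec visible in the dump above: a
    // RestartAlways pod whose first init container always exits non-zero,
    // which must keep the app container from ever starting.
    func failingInitPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
                    {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
                },
            },
        }
    }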
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:30:01.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5670" for this suite.
Jan 23 13:30:23.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:30:23.605: INFO: namespace init-container-5670 deletion completed in 22.243687665s

• [SLOW TEST:80.611 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:30:23.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 23 13:30:32.037: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-a81de3de-c478-4083-8106-3eac50c9ae12,GenerateName:,Namespace:events-7616,SelfLink:/api/v1/namespaces/events-7616/pods/send-events-a81de3de-c478-4083-8106-3eac50c9ae12,UID:d6d66184-2c5a-48c2-aa10-55f374a36a82,ResourceVersion:21560055,Generation:0,CreationTimestamp:2020-01-23 13:30:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 937011788,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-czd5c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-czd5c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-czd5c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267e5e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267e600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:30:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:30:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:30:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:30:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-23 13:30:24 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-23 13:30:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://9beeed5e5c4d47f082245cc8d7026ead77c51883458bfde19c4b254f5e844593}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 23 13:30:34.048: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 23 13:30:36.060: INFO: Saw kubelet event for our pod.
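The two checks above look up the event list by the pod's involvedObject fields and match on the reporting component. A sketch of the same query with client-go (the field-selector string and component names are the conventional ones; treat them as assumptions here):

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // sawSchedulerAndKubeletEvents verifies that both the scheduler and the
    // kubelet emitted at least one event about the given pod.
    func sawSchedulerAndKubeletEvents(cs kubernetes.Interface, ns, pod string) error {
        evts, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=" + pod + ",involvedObject.namespace=" + ns,
        })
        if err != nil {
            return err
        }
        var scheduler, kubelet bool
        for _, e := range evts.Items {
            switch e.Source.Component {
            case "default-scheduler":
                scheduler = true
            case "kubelet":
                kubelet = true
            }
        }
        if !scheduler || !kubelet {
            return fmt.Errorf("missing events: scheduler=%v kubelet=%v", scheduler, kubelet)
        }
        return nil
    }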
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:30:36.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7616" for this suite.
Jan 23 13:31:18.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:31:18.407: INFO: namespace events-7616 deletion completed in 42.320304156s

• [SLOW TEST:54.801 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
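
Aside: what this test asserts can be reproduced outside the suite by listing events filtered on the pod. A minimal sketch, assuming client-go at the v1.15 API (pre-context method signatures) and the kubeconfig path from this run; the pod name and namespace are taken from the log above:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The test selects events by involved object and source, once for
        // the scheduler ("Saw scheduler event") and once for the kubelet.
        for _, source := range []string{"default-scheduler", "kubelet"} {
            events, err := cs.CoreV1().Events("events-7616").List(metav1.ListOptions{
                FieldSelector: "involvedObject.kind=Pod," +
                    "involvedObject.name=send-events-a81de3de-c478-4083-8106-3eac50c9ae12," +
                    "source=" + source,
            })
            if err != nil {
                panic(err)
            }
            fmt.Printf("%d event(s) from %s\n", len(events.Items), source)
        }
    }
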
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:31:18.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-b8ebdda9-bdd4-4056-afce-10247790ffc5 in namespace container-probe-9201
Jan 23 13:31:26.547: INFO: Started pod busybox-b8ebdda9-bdd4-4056-afce-10247790ffc5 in namespace container-probe-9201
STEP: checking the pod's current state and verifying that restartCount is present
Jan 23 13:31:26.554: INFO: Initial restart count of pod busybox-b8ebdda9-bdd4-4056-afce-10247790ffc5 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:35:28.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9201" for this suite.
Jan 23 13:35:34.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:35:34.900: INFO: namespace container-probe-9201 deletion completed in 6.300763479s

• [SLOW TEST:256.492 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
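
For context, the pod this test creates carries an exec liveness probe that keeps succeeding, so restartCount stays 0 over the roughly four-minute observation window between the two log timestamps above. A rough sketch of an equivalent pod object, assuming the v1.15 core API (where Probe embeds the Handler field; newer releases renamed it):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "busybox",
                    Image: "busybox",
                    // Create the file the probe reads, then idle;
                    // "cat /tmp/health" therefore succeeds on every
                    // probe and no restart is ever triggered.
                    Command: []string{"/bin/sh", "-c", "echo ok >/tmp/health; sleep 600"},
                    LivenessProbe: &v1.Probe{
                        Handler: v1.Handler{
                            Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }
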
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:35:34.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 23 13:35:44.128: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:35:45.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3743" for this suite.
Jan 23 13:37:33.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:37:33.367: INFO: namespace replicaset-3743 deletion completed in 1m48.147431229s

• [SLOW TEST:118.466 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
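
The release step above hinges on a label change: once a pod no longer matches the ReplicaSet's selector, the controller drops its ownerReference. The suite itself updates the pod object with retries; a strategic-merge patch is a simpler way to achieve the same effect. A sketch, assuming client-go at the v1.15 API, with the pod and label names from the log:

    package main

    import (
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Overwrite the 'name' label so the pod stops matching the
        // ReplicaSet selector; the controller then releases (orphans) it.
        patch := []byte(`{"metadata":{"labels":{"name":"not-pod-adoption-release"}}}`)
        if _, err := cs.CoreV1().Pods("replicaset-3743").Patch(
            "pod-adoption-release", types.StrategicMergePatchType, patch); err != nil {
            panic(err)
        }
    }
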
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:37:33.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 13:37:33.414: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:37:34.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9927" for this suite.
Jan 23 13:37:40.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:37:40.744: INFO: namespace custom-resource-definition-9927 deletion completed in 6.187441304s

• [SLOW TEST:7.377 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
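
The body of this test is essentially a create followed by a delete against the apiextensions API. A compact sketch with a hypothetical "foos.example.com" definition, assuming the v1beta1 apiextensions client that a v1.15 cluster serves:

    package main

    import (
        apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
        apiextcs "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := apiextcs.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Hypothetical CRD; the CRD name must be <plural>.<group>.
        crd := &apiextv1beta1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
            Spec: apiextv1beta1.CustomResourceDefinitionSpec{
                Group:   "example.com",
                Version: "v1",
                Scope:   apiextv1beta1.NamespaceScoped,
                Names: apiextv1beta1.CustomResourceDefinitionNames{
                    Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
                },
            },
        }
        if _, err := client.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd); err != nil {
            panic(err)
        }
        if err := client.ApiextensionsV1beta1().CustomResourceDefinitions().Delete(
            crd.Name, &metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }
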
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:37:40.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-972206d0-8c8a-4c93-b76c-27692ce29cdd
STEP: Creating a pod to test consume secrets
Jan 23 13:37:40.883: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81" in namespace "projected-8225" to be "success or failure"
Jan 23 13:37:40.895: INFO: Pod "pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81": Phase="Pending", Reason="", readiness=false. Elapsed: 11.652594ms
Jan 23 13:37:42.902: INFO: Pod "pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018410127s
Jan 23 13:37:44.912: INFO: Pod "pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028866684s
Jan 23 13:37:46.926: INFO: Pod "pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04275787s
Jan 23 13:37:48.933: INFO: Pod "pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049330598s
Jan 23 13:37:50.942: INFO: Pod "pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058210324s
STEP: Saw pod success
Jan 23 13:37:50.942: INFO: Pod "pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81" satisfied condition "success or failure"
Jan 23 13:37:50.947: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81 container projected-secret-volume-test: 
STEP: delete the pod
Jan 23 13:37:50.988: INFO: Waiting for pod pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81 to disappear
Jan 23 13:37:50.997: INFO: Pod pod-projected-secrets-df6f7027-b7a1-42a8-b695-ae60e4eaac81 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:37:50.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8225" for this suite.
Jan 23 13:37:57.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:37:57.167: INFO: namespace projected-8225 deletion completed in 6.163913484s

• [SLOW TEST:16.421 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
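
What "defaultMode set" exercises: the projected volume mounts each secret key as a file whose permission bits default to the given mode, and the test container reads the file back before exiting (hence the pod reaching Succeeded above). A sketch of the relevant volume definition, assuming v1.15 core types; the secret name is illustrative:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        defaultMode := int32(0400) // files appear as -r-------- inside the pod
        vol := v1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: v1.VolumeSource{
                Projected: &v1.ProjectedVolumeSource{
                    DefaultMode: &defaultMode,
                    Sources: []v1.VolumeProjection{{
                        Secret: &v1.SecretProjection{
                            // Hypothetical secret name for illustration.
                            LocalObjectReference: v1.LocalObjectReference{
                                Name: "projected-secret-test",
                            },
                        },
                    }},
                },
            },
        }
        fmt.Printf("volume %q projects with defaultMode %o\n",
            vol.Name, *vol.Projected.DefaultMode)
    }
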
SSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:37:57.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 13:37:57.224: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 23 13:37:57.383: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 23 13:38:02.392: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 23 13:38:04.405: INFO: Creating deployment "test-rolling-update-deployment"
Jan 23 13:38:04.415: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 23 13:38:04.435: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 23 13:38:06.457: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 23 13:38:06.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:38:08.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:38:10.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383484, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:38:12.478: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 23 13:38:12.499: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3023,SelfLink:/apis/apps/v1/namespaces/deployment-3023/deployments/test-rolling-update-deployment,UID:aa413946-418b-48e7-a536-be453e6095f5,ResourceVersion:21560839,Generation:1,CreationTimestamp:2020-01-23 13:38:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-23 13:38:04 +0000 UTC 2020-01-23 13:38:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-23 13:38:11 +0000 UTC 2020-01-23 13:38:04 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 23 13:38:12.505: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3023,SelfLink:/apis/apps/v1/namespaces/deployment-3023/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:c3e7f939-bf69-435e-8c0b-059e841da2d3,ResourceVersion:21560829,Generation:1,CreationTimestamp:2020-01-23 13:38:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment aa413946-418b-48e7-a536-be453e6095f5 0xc001a4b397 0xc001a4b398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 23 13:38:12.505: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 23 13:38:12.505: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3023,SelfLink:/apis/apps/v1/namespaces/deployment-3023/replicasets/test-rolling-update-controller,UID:c790b4db-9cca-4976-b1d3-84a7d6d17651,ResourceVersion:21560837,Generation:2,CreationTimestamp:2020-01-23 13:37:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment aa413946-418b-48e7-a536-be453e6095f5 0xc001a4b2af 0xc001a4b2c0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 23 13:38:12.514: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-j4vz6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-j4vz6,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3023,SelfLink:/api/v1/namespaces/deployment-3023/pods/test-rolling-update-deployment-79f6b9d75c-j4vz6,UID:d811bb76-8691-455f-b4dc-b559518c99a2,ResourceVersion:21560828,Generation:0,CreationTimestamp:2020-01-23 13:38:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c c3e7f939-bf69-435e-8c0b-059e841da2d3 0xc002359417 0xc002359418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qlh5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qlh5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-9qlh5 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002359490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023594b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:38:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:38:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:38:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:38:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-23 13:38:04 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-23 13:38:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e1b3a4e8f5ff34b00532adc339d16f31b11a7539aa582ef8d6e28c6101986798}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:38:12.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3023" for this suite.
Jan 23 13:38:18.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:38:18.743: INFO: namespace deployment-3023 deletion completed in 6.223501171s

• [SLOW TEST:21.577 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
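
The intermediate states logged above (Replicas:2, UpdatedReplicas:1 while one old pod is still being replaced) follow from the default RollingUpdate strategy, which the struct dump shows as 25% maxUnavailable / 25% maxSurge (the stray "%!,(MISSING)" text appears to be a Go printf escaping artifact in the framework's struct formatting, not part of the values). A sketch of how that strategy is expressed, assuming the v1.15 apps API:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        maxUnavailable := intstr.FromString("25%")
        maxSurge := intstr.FromString("25%")
        strategy := appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                // With replicas=1, 25% rounds to: at most 1 surge pod,
                // and never fewer than 1 available pod, during a rollout.
                MaxUnavailable: &maxUnavailable,
                MaxSurge:       &maxSurge,
            },
        }
        fmt.Printf("%+v\n", strategy)
    }
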
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:38:18.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 13:38:18.869: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970" in namespace "downward-api-8092" to be "success or failure"
Jan 23 13:38:18.889: INFO: Pod "downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970": Phase="Pending", Reason="", readiness=false. Elapsed: 20.477754ms
Jan 23 13:38:20.899: INFO: Pod "downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030338116s
Jan 23 13:38:22.912: INFO: Pod "downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043413984s
Jan 23 13:38:24.924: INFO: Pod "downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054847806s
Jan 23 13:38:26.932: INFO: Pod "downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063254987s
Jan 23 13:38:28.940: INFO: Pod "downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070814365s
STEP: Saw pod success
Jan 23 13:38:28.940: INFO: Pod "downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970" satisfied condition "success or failure"
Jan 23 13:38:28.943: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970 container client-container: 
STEP: delete the pod
Jan 23 13:38:28.986: INFO: Waiting for pod downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970 to disappear
Jan 23 13:38:29.090: INFO: Pod downwardapi-volume-bafd124a-9c24-497c-9e12-a99bc3363970 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:38:29.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8092" for this suite.
Jan 23 13:38:35.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:38:35.388: INFO: namespace downward-api-8092 deletion completed in 6.286755418s

• [SLOW TEST:16.643 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
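
The interesting part of this test is the resourceFieldRef: because the container sets no CPU limit, the kubelet substitutes the node's allocatable CPU when rendering the downward API file. A sketch of the volume file definition, assuming v1.15 core types; the 1m divisor expresses the value in millicores, matching how such tests typically compare it:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        file := v1.DownwardAPIVolumeFile{
            Path: "cpu_limit",
            ResourceFieldRef: &v1.ResourceFieldSelector{
                ContainerName: "client-container", // container name from the log
                Resource:      "limits.cpu",
                // Report in millicores; with no limit set on the container,
                // the node's allocatable CPU is what gets written.
                Divisor: resource.MustParse("1m"),
            },
        }
        fmt.Println(file.Path)
    }
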
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:38:35.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:38:42.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1262" for this suite.
Jan 23 13:38:48.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:38:48.259: INFO: namespace namespaces-1262 deletion completed in 6.207030092s
STEP: Destroying namespace "nsdeletetest-7933" for this suite.
Jan 23 13:38:48.262: INFO: Namespace nsdeletetest-7933 was already deleted
STEP: Destroying namespace "nsdeletetest-8719" for this suite.
Jan 23 13:38:54.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:38:54.411: INFO: namespace nsdeletetest-8719 deletion completed in 6.149267217s

• [SLOW TEST:19.023 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
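
The assertion sequence above (create a service, delete the namespace, recreate it, expect zero services) relies on namespace deletion cascading to every namespaced object. A simplified sketch of the check, assuming client-go at the v1.15 API; the namespace name is illustrative, and the suite itself waits for the deletion to complete before re-listing:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Deleting the namespace removes the service it contained...
        if err := cs.CoreV1().Namespaces().Delete(
            "nsdeletetest-7933", &metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
        // ...so once deletion has finished and the namespace is recreated,
        // listing services there should return nothing.
        svcs, err := cs.CoreV1().Services("nsdeletetest-7933").List(metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("services found: %d (want 0)\n", len(svcs.Items))
    }
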
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:38:54.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0123 13:39:37.571287       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 13:39:37.571: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:39:37.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7045" for this suite.
Jan 23 13:39:57.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:39:57.760: INFO: namespace gc-7045 deletion completed in 20.181048953s

• [SLOW TEST:63.349 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
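
"Delete options say so" refers to the propagation policy on the delete call: with Orphan, the garbage collector strips ownerReferences instead of deleting dependents, which is why the pods are expected to survive the 30-second watch above. A sketch, assuming client-go at the v1.15 API; the rc name is hypothetical:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Orphan propagation: the RC object goes away, but its pods keep
        // running and lose their ownerReference instead of being collected.
        orphan := metav1.DeletePropagationOrphan
        if err := cs.CoreV1().ReplicationControllers("gc-7045").Delete(
            "simpletest.rc", &metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
            panic(err)
        }
    }
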
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:39:57.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 13:39:57.964: INFO: Waiting up to 5m0s for pod "downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983" in namespace "projected-142" to be "success or failure"
Jan 23 13:39:57.968: INFO: Pod "downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983": Phase="Pending", Reason="", readiness=false. Elapsed: 3.879787ms
Jan 23 13:39:59.975: INFO: Pod "downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01124866s
Jan 23 13:40:01.993: INFO: Pod "downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029001158s
Jan 23 13:40:04.003: INFO: Pod "downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03923407s
Jan 23 13:40:06.013: INFO: Pod "downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049119829s
Jan 23 13:40:08.027: INFO: Pod "downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062833607s
STEP: Saw pod success
Jan 23 13:40:08.027: INFO: Pod "downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983" satisfied condition "success or failure"
Jan 23 13:40:08.034: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983 container client-container: 
STEP: delete the pod
Jan 23 13:40:08.339: INFO: Waiting for pod downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983 to disappear
Jan 23 13:40:08.346: INFO: Pod downwardapi-volume-952e007f-a798-4d81-b768-d035e7697983 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:40:08.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-142" for this suite.
Jan 23 13:40:14.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:40:14.569: INFO: namespace projected-142 deletion completed in 6.214242804s

• [SLOW TEST:16.809 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:40:14.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5602
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 23 13:40:14.718: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 23 13:40:50.963: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5602 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 13:40:50.963: INFO: >>> kubeConfig: /root/.kube/config
I0123 13:40:51.055742       8 log.go:172] (0xc000834370) (0xc0011c0640) Create stream
I0123 13:40:51.055937       8 log.go:172] (0xc000834370) (0xc0011c0640) Stream added, broadcasting: 1
I0123 13:40:51.071179       8 log.go:172] (0xc000834370) Reply frame received for 1
I0123 13:40:51.071293       8 log.go:172] (0xc000834370) (0xc002132000) Create stream
I0123 13:40:51.071309       8 log.go:172] (0xc000834370) (0xc002132000) Stream added, broadcasting: 3
I0123 13:40:51.073711       8 log.go:172] (0xc000834370) Reply frame received for 3
I0123 13:40:51.073735       8 log.go:172] (0xc000834370) (0xc0021320a0) Create stream
I0123 13:40:51.073744       8 log.go:172] (0xc000834370) (0xc0021320a0) Stream added, broadcasting: 5
I0123 13:40:51.076122       8 log.go:172] (0xc000834370) Reply frame received for 5
I0123 13:40:51.326985       8 log.go:172] (0xc000834370) Data frame received for 3
I0123 13:40:51.327058       8 log.go:172] (0xc002132000) (3) Data frame handling
I0123 13:40:51.327077       8 log.go:172] (0xc002132000) (3) Data frame sent
I0123 13:40:51.446646       8 log.go:172] (0xc000834370) Data frame received for 1
I0123 13:40:51.446751       8 log.go:172] (0xc000834370) (0xc0021320a0) Stream removed, broadcasting: 5
I0123 13:40:51.446806       8 log.go:172] (0xc0011c0640) (1) Data frame handling
I0123 13:40:51.446835       8 log.go:172] (0xc0011c0640) (1) Data frame sent
I0123 13:40:51.446857       8 log.go:172] (0xc000834370) (0xc002132000) Stream removed, broadcasting: 3
I0123 13:40:51.446916       8 log.go:172] (0xc000834370) (0xc0011c0640) Stream removed, broadcasting: 1
I0123 13:40:51.446947       8 log.go:172] (0xc000834370) Go away received
I0123 13:40:51.447077       8 log.go:172] (0xc000834370) (0xc0011c0640) Stream removed, broadcasting: 1
I0123 13:40:51.447087       8 log.go:172] (0xc000834370) (0xc002132000) Stream removed, broadcasting: 3
I0123 13:40:51.447097       8 log.go:172] (0xc000834370) (0xc0021320a0) Stream removed, broadcasting: 5
Jan 23 13:40:51.447: INFO: Waiting for endpoints: map[]
Jan 23 13:40:51.457: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5602 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 13:40:51.457: INFO: >>> kubeConfig: /root/.kube/config
I0123 13:40:51.522518       8 log.go:172] (0xc0012c8790) (0xc002468be0) Create stream
I0123 13:40:51.522643       8 log.go:172] (0xc0012c8790) (0xc002468be0) Stream added, broadcasting: 1
I0123 13:40:51.528584       8 log.go:172] (0xc0012c8790) Reply frame received for 1
I0123 13:40:51.528617       8 log.go:172] (0xc0012c8790) (0xc002468d20) Create stream
I0123 13:40:51.528625       8 log.go:172] (0xc0012c8790) (0xc002468d20) Stream added, broadcasting: 3
I0123 13:40:51.530199       8 log.go:172] (0xc0012c8790) Reply frame received for 3
I0123 13:40:51.530222       8 log.go:172] (0xc0012c8790) (0xc0011c06e0) Create stream
I0123 13:40:51.530234       8 log.go:172] (0xc0012c8790) (0xc0011c06e0) Stream added, broadcasting: 5
I0123 13:40:51.533716       8 log.go:172] (0xc0012c8790) Reply frame received for 5
I0123 13:40:51.662822       8 log.go:172] (0xc0012c8790) Data frame received for 3
I0123 13:40:51.662867       8 log.go:172] (0xc002468d20) (3) Data frame handling
I0123 13:40:51.662896       8 log.go:172] (0xc002468d20) (3) Data frame sent
I0123 13:40:51.791966       8 log.go:172] (0xc0012c8790) (0xc0011c06e0) Stream removed, broadcasting: 5
I0123 13:40:51.792271       8 log.go:172] (0xc0012c8790) Data frame received for 1
I0123 13:40:51.792559       8 log.go:172] (0xc0012c8790) (0xc002468d20) Stream removed, broadcasting: 3
I0123 13:40:51.792767       8 log.go:172] (0xc002468be0) (1) Data frame handling
I0123 13:40:51.792816       8 log.go:172] (0xc002468be0) (1) Data frame sent
I0123 13:40:51.792835       8 log.go:172] (0xc0012c8790) (0xc002468be0) Stream removed, broadcasting: 1
I0123 13:40:51.792878       8 log.go:172] (0xc0012c8790) Go away received
I0123 13:40:51.793380       8 log.go:172] (0xc0012c8790) (0xc002468be0) Stream removed, broadcasting: 1
I0123 13:40:51.793422       8 log.go:172] (0xc0012c8790) (0xc002468d20) Stream removed, broadcasting: 3
I0123 13:40:51.793483       8 log.go:172] (0xc0012c8790) (0xc0011c06e0) Stream removed, broadcasting: 5
Jan 23 13:40:51.793: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:40:51.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5602" for this suite.
Jan 23 13:41:15.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:41:15.972: INFO: namespace pod-network-test-5602 deletion completed in 24.163162906s

• [SLOW TEST:61.403 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
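
The exec'd curl above is the whole mechanism of this check: a test container at one pod IP is asked, via its /dial endpoint, to reach another pod's IP on :8080 and report the hostnames it gets back. The same probe can be issued directly from inside the cluster network; a sketch in Go, with the pod IPs and query string taken verbatim from the log:

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    func main() {
        // Ask the test container at 10.44.0.2 to dial the pod at 10.44.0.1
        // over HTTP; the JSON response lists the hostnames that answered.
        // Only reachable from inside the cluster's pod network.
        url := "http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }
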
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:41:15.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 13:41:16.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4368'
Jan 23 13:41:18.235: INFO: stderr: ""
Jan 23 13:41:18.235: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan 23 13:41:18.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4368'
Jan 23 13:41:18.795: INFO: stderr: ""
Jan 23 13:41:18.795: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 23 13:41:19.813: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 13:41:19.814: INFO: Found 0 / 1
Jan 23 13:41:20.804: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 13:41:20.804: INFO: Found 0 / 1
Jan 23 13:41:21.818: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 13:41:21.818: INFO: Found 0 / 1
Jan 23 13:41:22.812: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 13:41:22.812: INFO: Found 0 / 1
Jan 23 13:41:23.814: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 13:41:23.814: INFO: Found 0 / 1
Jan 23 13:41:24.811: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 13:41:24.811: INFO: Found 0 / 1
Jan 23 13:41:25.808: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 13:41:25.808: INFO: Found 1 / 1
Jan 23 13:41:25.808: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 23 13:41:25.813: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 13:41:25.813: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 23 13:41:25.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-444s4 --namespace=kubectl-4368'
Jan 23 13:41:25.966: INFO: stderr: ""
Jan 23 13:41:25.966: INFO: stdout: "Name:           redis-master-444s4\nNamespace:      kubectl-4368\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Thu, 23 Jan 2020 13:41:18 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://5e3cb461165029793343186998d9f5e705659c6fb8acaa2f4dd6a882ee5b2b99\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 23 Jan 2020 13:41:24 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ps4bc (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-ps4bc:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-ps4bc\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  7s    default-scheduler    Successfully assigned kubectl-4368/redis-master-444s4 to iruya-node\n  Normal  Pulled     3s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Jan 23 13:41:25.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4368'
Jan 23 13:41:26.093: INFO: stderr: ""
Jan 23 13:41:26.093: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-4368\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-444s4\n"
Jan 23 13:41:26.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4368'
Jan 23 13:41:26.195: INFO: stderr: ""
Jan 23 13:41:26.195: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-4368\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.103.112.184\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Jan 23 13:41:26.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan 23 13:41:26.307: INFO: stderr: ""
Jan 23 13:41:26.307: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Thu, 23 Jan 2020 13:41:12 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 23 Jan 2020 13:41:12 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 23 Jan 2020 13:41:12 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 23 Jan 2020 13:41:12 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         172d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         103d\n  kubectl-4368               redis-master-444s4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  
ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan 23 13:41:26.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4368'
Jan 23 13:41:26.430: INFO: stderr: ""
Jan 23 13:41:26.430: INFO: stdout: "Name:         kubectl-4368\nLabels:       e2e-framework=kubectl\n              e2e-run=1cc389e5-13c5-45fa-8a68-84eacb028761\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:41:26.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4368" for this suite.
Jan 23 13:41:50.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:41:50.649: INFO: namespace kubectl-4368 deletion completed in 24.212303966s

• [SLOW TEST:34.676 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
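Note: the [Kubectl describe] test above shells out to kubectl and checks that the describe output for the rc, its pods, the node, and the test namespace contains the expected fields. The same checks can be run by hand; the node name and namespace below are taken from this run, and the namespace exists only while the suite is running.

kubectl --kubeconfig=/root/.kube/config describe node iruya-node
kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4368

------------------------------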
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:41:50.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-b840ed09-b6ee-4d0e-ad73-3cf31b175f72
STEP: Creating a pod to test consume secrets
Jan 23 13:41:50.806: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-37130aa1-fe66-444f-8c5d-28f3b9e2d3b0" in namespace "projected-2068" to be "success or failure"
Jan 23 13:41:50.907: INFO: Pod "pod-projected-secrets-37130aa1-fe66-444f-8c5d-28f3b9e2d3b0": Phase="Pending", Reason="", readiness=false. Elapsed: 100.154123ms
Jan 23 13:41:52.915: INFO: Pod "pod-projected-secrets-37130aa1-fe66-444f-8c5d-28f3b9e2d3b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108902894s
Jan 23 13:41:54.931: INFO: Pod "pod-projected-secrets-37130aa1-fe66-444f-8c5d-28f3b9e2d3b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124257722s
Jan 23 13:41:56.948: INFO: Pod "pod-projected-secrets-37130aa1-fe66-444f-8c5d-28f3b9e2d3b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141376963s
Jan 23 13:41:58.967: INFO: Pod "pod-projected-secrets-37130aa1-fe66-444f-8c5d-28f3b9e2d3b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.160193592s
STEP: Saw pod success
Jan 23 13:41:58.967: INFO: Pod "pod-projected-secrets-37130aa1-fe66-444f-8c5d-28f3b9e2d3b0" satisfied condition "success or failure"
Jan 23 13:41:58.975: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-37130aa1-fe66-444f-8c5d-28f3b9e2d3b0 container projected-secret-volume-test: 
STEP: delete the pod
Jan 23 13:41:59.085: INFO: Waiting for pod pod-projected-secrets-37130aa1-fe66-444f-8c5d-28f3b9e2d3b0 to disappear
Jan 23 13:41:59.091: INFO: Pod pod-projected-secrets-37130aa1-fe66-444f-8c5d-28f3b9e2d3b0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:41:59.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2068" for this suite.
Jan 23 13:42:05.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:42:05.247: INFO: namespace projected-2068 deletion completed in 6.148509829s

• [SLOW TEST:14.597 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
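Note: the projected-secret test above consumes a secret through a projected volume whose items list remaps a secret key onto a new file path, then reads the file back inside the pod. A minimal sketch of that shape follows; the secret name, key, and path are assumptions, since the log records only the generated object names.

# Names (secret, key, path) are illustrative; the volume layout is the point.
kubectl create secret generic projected-secret-test-map --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1   # the mapping under test: key exposed under a new path
EOF

------------------------------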
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:42:05.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 23 13:42:23.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:23.603: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:25.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:25.613: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:27.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:27.618: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:29.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:29.615: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:31.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:31.612: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:33.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:33.616: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:35.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:35.617: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:37.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:37.613: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:39.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:39.613: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:41.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:41.618: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:43.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:43.615: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:45.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:45.615: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 23 13:42:47.604: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 23 13:42:47.613: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:42:47.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3197" for this suite.
Jan 23 13:43:09.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:43:09.977: INFO: namespace container-lifecycle-hook-3197 deletion completed in 22.23565305s

• [SLOW TEST:64.730 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
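Note: the lifecycle-hook test above creates a pod whose container declares a preStop exec hook, deletes the pod (the polling loop is the deletion wait), and then verifies the hook ran before the container stopped. A minimal sketch, with a plain echo standing in for the real test's callback to the handler pod it created in BeforeEach:

# The preStop command here is a placeholder; the e2e test's hook reports
# back to a separate HTTP handler pod instead.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo prestop-ran"]
EOF
# Deleting the pod triggers the hook before the container is terminated.
kubectl delete pod pod-with-prestop-exec-hook

------------------------------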
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:43:09.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4258
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 23 13:43:10.050: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 23 13:43:44.256: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-4258 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 13:43:44.256: INFO: >>> kubeConfig: /root/.kube/config
I0123 13:43:44.335809       8 log.go:172] (0xc0012be370) (0xc001f83a40) Create stream
I0123 13:43:44.335896       8 log.go:172] (0xc0012be370) (0xc001f83a40) Stream added, broadcasting: 1
I0123 13:43:44.344513       8 log.go:172] (0xc0012be370) Reply frame received for 1
I0123 13:43:44.344680       8 log.go:172] (0xc0012be370) (0xc0024d5d60) Create stream
I0123 13:43:44.344704       8 log.go:172] (0xc0012be370) (0xc0024d5d60) Stream added, broadcasting: 3
I0123 13:43:44.347327       8 log.go:172] (0xc0012be370) Reply frame received for 3
I0123 13:43:44.347392       8 log.go:172] (0xc0012be370) (0xc00129e780) Create stream
I0123 13:43:44.347403       8 log.go:172] (0xc0012be370) (0xc00129e780) Stream added, broadcasting: 5
I0123 13:43:44.350266       8 log.go:172] (0xc0012be370) Reply frame received for 5
I0123 13:43:44.521691       8 log.go:172] (0xc0012be370) Data frame received for 3
I0123 13:43:44.521809       8 log.go:172] (0xc0024d5d60) (3) Data frame handling
I0123 13:43:44.521851       8 log.go:172] (0xc0024d5d60) (3) Data frame sent
I0123 13:43:44.703648       8 log.go:172] (0xc0012be370) (0xc0024d5d60) Stream removed, broadcasting: 3
I0123 13:43:44.704059       8 log.go:172] (0xc0012be370) Data frame received for 1
I0123 13:43:44.704362       8 log.go:172] (0xc0012be370) (0xc00129e780) Stream removed, broadcasting: 5
I0123 13:43:44.704480       8 log.go:172] (0xc001f83a40) (1) Data frame handling
I0123 13:43:44.704555       8 log.go:172] (0xc001f83a40) (1) Data frame sent
I0123 13:43:44.704599       8 log.go:172] (0xc0012be370) (0xc001f83a40) Stream removed, broadcasting: 1
I0123 13:43:44.704640       8 log.go:172] (0xc0012be370) Go away received
I0123 13:43:44.704913       8 log.go:172] (0xc0012be370) (0xc001f83a40) Stream removed, broadcasting: 1
I0123 13:43:44.705002       8 log.go:172] (0xc0012be370) (0xc0024d5d60) Stream removed, broadcasting: 3
I0123 13:43:44.705043       8 log.go:172] (0xc0012be370) (0xc00129e780) Stream removed, broadcasting: 5
Jan 23 13:43:44.705: INFO: Waiting for endpoints: map[]
Jan 23 13:43:44.717: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-4258 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 13:43:44.717: INFO: >>> kubeConfig: /root/.kube/config
I0123 13:43:44.783031       8 log.go:172] (0xc001684580) (0xc000b910e0) Create stream
I0123 13:43:44.783318       8 log.go:172] (0xc001684580) (0xc000b910e0) Stream added, broadcasting: 1
I0123 13:43:44.790375       8 log.go:172] (0xc001684580) Reply frame received for 1
I0123 13:43:44.790405       8 log.go:172] (0xc001684580) (0xc00129e820) Create stream
I0123 13:43:44.790417       8 log.go:172] (0xc001684580) (0xc00129e820) Stream added, broadcasting: 3
I0123 13:43:44.791779       8 log.go:172] (0xc001684580) Reply frame received for 3
I0123 13:43:44.791802       8 log.go:172] (0xc001684580) (0xc000b912c0) Create stream
I0123 13:43:44.791813       8 log.go:172] (0xc001684580) (0xc000b912c0) Stream added, broadcasting: 5
I0123 13:43:44.793131       8 log.go:172] (0xc001684580) Reply frame received for 5
I0123 13:43:44.912946       8 log.go:172] (0xc001684580) Data frame received for 3
I0123 13:43:44.913051       8 log.go:172] (0xc00129e820) (3) Data frame handling
I0123 13:43:44.913073       8 log.go:172] (0xc00129e820) (3) Data frame sent
I0123 13:43:45.042797       8 log.go:172] (0xc001684580) Data frame received for 1
I0123 13:43:45.042976       8 log.go:172] (0xc001684580) (0xc000b912c0) Stream removed, broadcasting: 5
I0123 13:43:45.043064       8 log.go:172] (0xc000b910e0) (1) Data frame handling
I0123 13:43:45.043098       8 log.go:172] (0xc000b910e0) (1) Data frame sent
I0123 13:43:45.043111       8 log.go:172] (0xc001684580) (0xc00129e820) Stream removed, broadcasting: 3
I0123 13:43:45.043205       8 log.go:172] (0xc001684580) (0xc000b910e0) Stream removed, broadcasting: 1
I0123 13:43:45.043236       8 log.go:172] (0xc001684580) Go away received
I0123 13:43:45.043667       8 log.go:172] (0xc001684580) (0xc000b910e0) Stream removed, broadcasting: 1
I0123 13:43:45.043810       8 log.go:172] (0xc001684580) (0xc00129e820) Stream removed, broadcasting: 3
I0123 13:43:45.044383       8 log.go:172] (0xc001684580) (0xc000b912c0) Stream removed, broadcasting: 5
Jan 23 13:43:45.044: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:43:45.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4258" for this suite.
Jan 23 13:44:07.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:44:07.213: INFO: namespace pod-network-test-4258 deletion completed in 22.15518646s

• [SLOW TEST:57.236 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
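Note: the granular networking check runs one server pod per node plus a host-network exec pod, then asks each server's /dial endpoint to reach every other pod over UDP; "Waiting for endpoints: map[]" means no expected hostname is still outstanding. The probe itself is just the exec'd curl from the log, reformatted here for readability (the pod IPs exist only during this run):

kubectl exec -n pod-network-test-4258 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"

------------------------------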
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:44:07.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan 23 13:44:07.265: INFO: Waiting up to 5m0s for pod "client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1" in namespace "containers-1068" to be "success or failure"
Jan 23 13:44:07.293: INFO: Pod "client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1": Phase="Pending", Reason="", readiness=false. Elapsed: 27.881237ms
Jan 23 13:44:09.303: INFO: Pod "client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037854897s
Jan 23 13:44:11.310: INFO: Pod "client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044282016s
Jan 23 13:44:13.319: INFO: Pod "client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053911567s
Jan 23 13:44:15.331: INFO: Pod "client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065631994s
Jan 23 13:44:17.342: INFO: Pod "client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076631405s
STEP: Saw pod success
Jan 23 13:44:17.342: INFO: Pod "client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1" satisfied condition "success or failure"
Jan 23 13:44:17.346: INFO: Trying to get logs from node iruya-node pod client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1 container test-container: 
STEP: delete the pod
Jan 23 13:44:17.438: INFO: Waiting for pod client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1 to disappear
Jan 23 13:44:17.445: INFO: Pod client-containers-b3f4172f-5e5c-4a43-9f57-c23ca8ef5ba1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:44:17.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1068" for this suite.
Jan 23 13:44:23.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:44:23.582: INFO: namespace containers-1068 deletion completed in 6.128790238s

• [SLOW TEST:16.368 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
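Note: the pod in this test sets neither command nor args, so the container must fall back to the image's built-in ENTRYPOINT/CMD; the test then reads the container log to confirm the defaults ran. A minimal sketch, assuming the entrypoint-tester image this e2e family typically uses:

# With command and args omitted, the kubelet runs the image defaults.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0   # assumed image
EOF
kubectl logs client-containers-example   # should show the image's default output

------------------------------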
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:44:23.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 13:44:23.705: INFO: Creating deployment "test-recreate-deployment"
Jan 23 13:44:23.720: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 23 13:44:23.734: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 23 13:44:25.747: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 23 13:44:25.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:44:27.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:44:29.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715383863, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 23 13:44:31.758: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 23 13:44:31.786: INFO: Updating deployment test-recreate-deployment
Jan 23 13:44:31.786: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 23 13:44:32.169: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2827,SelfLink:/apis/apps/v1/namespaces/deployment-2827/deployments/test-recreate-deployment,UID:dfb65bdd-ef92-4100-b628-2176c45202b2,ResourceVersion:21561920,Generation:2,CreationTimestamp:2020-01-23 13:44:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-23 13:44:32 +0000 UTC 2020-01-23 13:44:32 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-23 13:44:32 +0000 UTC 2020-01-23 13:44:23 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 23 13:44:32.177: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2827,SelfLink:/apis/apps/v1/namespaces/deployment-2827/replicasets/test-recreate-deployment-5c8c9cc69d,UID:429121fe-8e06-4240-95cc-1aea1f247611,ResourceVersion:21561917,Generation:1,CreationTimestamp:2020-01-23 13:44:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment dfb65bdd-ef92-4100-b628-2176c45202b2 0xc000dc4607 0xc000dc4608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 23 13:44:32.177: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 23 13:44:32.177: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2827,SelfLink:/apis/apps/v1/namespaces/deployment-2827/replicasets/test-recreate-deployment-6df85df6b9,UID:386decbc-5ece-4bc3-b6b7-2e1663c0a023,ResourceVersion:21561909,Generation:2,CreationTimestamp:2020-01-23 13:44:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment dfb65bdd-ef92-4100-b628-2176c45202b2 0xc000dc4777 0xc000dc4778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 23 13:44:32.183: INFO: Pod "test-recreate-deployment-5c8c9cc69d-jjkls" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-jjkls,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2827,SelfLink:/api/v1/namespaces/deployment-2827/pods/test-recreate-deployment-5c8c9cc69d-jjkls,UID:b6b40d96-b609-4b9d-bfb3-2b907f4c25bc,ResourceVersion:21561916,Generation:0,CreationTimestamp:2020-01-23 13:44:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 429121fe-8e06-4240-95cc-1aea1f247611 0xc000dc5247 0xc000dc5248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ckhqb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ckhqb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ckhqb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000dc52d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000dc52f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 13:44:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:44:32.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2827" for this suite.
Jan 23 13:44:38.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:44:38.385: INFO: namespace deployment-2827 deletion completed in 6.1917882s

• [SLOW TEST:14.803 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
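Note: with strategy type Recreate, the deployment controller scales the old ReplicaSet to zero before the new ReplicaSet creates any pods, which is why the dump above shows the old ReplicaSet at Replicas:0 while the sole new pod is still Pending. A sketch of the deployment and the rollout trigger, reconstructed from the labels and images in the dump:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate        # old pods are deleted before any new pods are created
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# Trigger the second rollout the test performs by swapping the image.
kubectl set image deployment/test-recreate-deployment redis=docker.io/library/nginx:1.14-alpine

------------------------------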
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:44:38.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-db01abb0-04bc-412c-a139-7b4fe850ab7a
STEP: Creating a pod to test consume secrets
Jan 23 13:44:38.475: INFO: Waiting up to 5m0s for pod "pod-secrets-4eae3230-61c9-447e-9240-61517b568a02" in namespace "secrets-8670" to be "success or failure"
Jan 23 13:44:38.485: INFO: Pod "pod-secrets-4eae3230-61c9-447e-9240-61517b568a02": Phase="Pending", Reason="", readiness=false. Elapsed: 8.986555ms
Jan 23 13:44:40.499: INFO: Pod "pod-secrets-4eae3230-61c9-447e-9240-61517b568a02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022801736s
Jan 23 13:44:42.518: INFO: Pod "pod-secrets-4eae3230-61c9-447e-9240-61517b568a02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041826434s
Jan 23 13:44:44.531: INFO: Pod "pod-secrets-4eae3230-61c9-447e-9240-61517b568a02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054945954s
Jan 23 13:44:46.547: INFO: Pod "pod-secrets-4eae3230-61c9-447e-9240-61517b568a02": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071529624s
Jan 23 13:44:48.562: INFO: Pod "pod-secrets-4eae3230-61c9-447e-9240-61517b568a02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085999724s
STEP: Saw pod success
Jan 23 13:44:48.562: INFO: Pod "pod-secrets-4eae3230-61c9-447e-9240-61517b568a02" satisfied condition "success or failure"
Jan 23 13:44:48.568: INFO: Trying to get logs from node iruya-node pod pod-secrets-4eae3230-61c9-447e-9240-61517b568a02 container secret-volume-test: 
STEP: delete the pod
Jan 23 13:44:48.705: INFO: Waiting for pod pod-secrets-4eae3230-61c9-447e-9240-61517b568a02 to disappear
Jan 23 13:44:48.716: INFO: Pod pod-secrets-4eae3230-61c9-447e-9240-61517b568a02 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:44:48.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8670" for this suite.
Jan 23 13:44:54.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:44:54.925: INFO: namespace secrets-8670 deletion completed in 6.162793008s

• [SLOW TEST:16.539 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
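Note: this is the same consumption pattern as the earlier projected-secret test, but through a plain secret volume; only the volume stanza differs. A sketch, with the same caveat that the secret name, key, and path are assumptions:

kubectl create secret generic secret-test-map --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map   # plain secret volume, not projected
      items:
      - key: data-1
        path: new-path-data-1
EOF

------------------------------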
S
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:44:54.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 13:44:55.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7194bc64-8161-4f65-abb2-a24c7cdb1508" in namespace "downward-api-1429" to be "success or failure"
Jan 23 13:44:55.562: INFO: Pod "downwardapi-volume-7194bc64-8161-4f65-abb2-a24c7cdb1508": Phase="Pending", Reason="", readiness=false. Elapsed: 60.733174ms
Jan 23 13:44:57.569: INFO: Pod "downwardapi-volume-7194bc64-8161-4f65-abb2-a24c7cdb1508": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068168054s
Jan 23 13:44:59.584: INFO: Pod "downwardapi-volume-7194bc64-8161-4f65-abb2-a24c7cdb1508": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082889678s
Jan 23 13:45:01.596: INFO: Pod "downwardapi-volume-7194bc64-8161-4f65-abb2-a24c7cdb1508": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095373607s
Jan 23 13:45:03.646: INFO: Pod "downwardapi-volume-7194bc64-8161-4f65-abb2-a24c7cdb1508": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.144607832s
STEP: Saw pod success
Jan 23 13:45:03.646: INFO: Pod "downwardapi-volume-7194bc64-8161-4f65-abb2-a24c7cdb1508" satisfied condition "success or failure"
Jan 23 13:45:03.651: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7194bc64-8161-4f65-abb2-a24c7cdb1508 container client-container: 
STEP: delete the pod
Jan 23 13:45:03.735: INFO: Waiting for pod downwardapi-volume-7194bc64-8161-4f65-abb2-a24c7cdb1508 to disappear
Jan 23 13:45:03.850: INFO: Pod downwardapi-volume-7194bc64-8161-4f65-abb2-a24c7cdb1508 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:45:03.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1429" for this suite.
Jan 23 13:45:09.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:45:10.062: INFO: namespace downward-api-1429 deletion completed in 6.191587436s

• [SLOW TEST:15.137 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
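Note: the downward API volume test exposes the pod's own metadata.name as a file inside the container and verifies the file's contents through the container log. A minimal sketch of that volume shape (pod and file names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the pod's own name, resolved by the kubelet
EOF

------------------------------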
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:45:10.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 13:45:10.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-847e7614-cb10-4216-b730-7412f6f7d66a" in namespace "projected-8312" to be "success or failure"
Jan 23 13:45:10.252: INFO: Pod "downwardapi-volume-847e7614-cb10-4216-b730-7412f6f7d66a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.645928ms
Jan 23 13:45:12.260: INFO: Pod "downwardapi-volume-847e7614-cb10-4216-b730-7412f6f7d66a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028093608s
Jan 23 13:45:14.273: INFO: Pod "downwardapi-volume-847e7614-cb10-4216-b730-7412f6f7d66a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040569365s
Jan 23 13:45:16.293: INFO: Pod "downwardapi-volume-847e7614-cb10-4216-b730-7412f6f7d66a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061206089s
Jan 23 13:45:18.299: INFO: Pod "downwardapi-volume-847e7614-cb10-4216-b730-7412f6f7d66a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067511616s
STEP: Saw pod success
Jan 23 13:45:18.300: INFO: Pod "downwardapi-volume-847e7614-cb10-4216-b730-7412f6f7d66a" satisfied condition "success or failure"
Jan 23 13:45:18.304: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-847e7614-cb10-4216-b730-7412f6f7d66a container client-container: 
STEP: delete the pod
Jan 23 13:45:18.476: INFO: Waiting for pod downwardapi-volume-847e7614-cb10-4216-b730-7412f6f7d66a to disappear
Jan 23 13:45:18.503: INFO: Pod downwardapi-volume-847e7614-cb10-4216-b730-7412f6f7d66a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:45:18.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8312" for this suite.
Jan 23 13:45:24.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:45:24.694: INFO: namespace projected-8312 deletion completed in 6.180600704s

• [SLOW TEST:14.631 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
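Note: the projected variant performs the same podname check; structurally, the only difference is that the downward API items sit under a projected volume's sources list. A sketch of just that difference, same caveats as above:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:     # nested under projected.sources instead of standalone
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

------------------------------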
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:45:24.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan 23 13:45:24.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1432'
Jan 23 13:45:25.148: INFO: stderr: ""
Jan 23 13:45:25.148: INFO: stdout: "pod/pause created\n"
Jan 23 13:45:25.148: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 23 13:45:25.149: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1432" to be "running and ready"
Jan 23 13:45:25.181: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 32.817308ms
Jan 23 13:45:27.192: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043539426s
Jan 23 13:45:29.201: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052133245s
Jan 23 13:45:31.211: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062476304s
Jan 23 13:45:33.219: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.070722904s
Jan 23 13:45:33.219: INFO: Pod "pause" satisfied condition "running and ready"
Jan 23 13:45:33.219: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 23 13:45:33.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1432'
Jan 23 13:45:33.353: INFO: stderr: ""
Jan 23 13:45:33.353: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 23 13:45:33.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1432'
Jan 23 13:45:33.451: INFO: stderr: ""
Jan 23 13:45:33.451: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 23 13:45:33.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1432'
Jan 23 13:45:33.537: INFO: stderr: ""
Jan 23 13:45:33.537: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 23 13:45:33.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1432'
Jan 23 13:45:33.621: INFO: stderr: ""
Jan 23 13:45:33.622: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan 23 13:45:33.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1432'
Jan 23 13:45:33.747: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 13:45:33.748: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 23 13:45:33.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1432'
Jan 23 13:45:33.884: INFO: stderr: "No resources found.\n"
Jan 23 13:45:33.884: INFO: stdout: ""
Jan 23 13:45:33.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1432 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 23 13:45:34.025: INFO: stderr: ""
Jan 23 13:45:34.025: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:45:34.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1432" for this suite.
Jan 23 13:45:40.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:45:40.188: INFO: namespace kubectl-1432 deletion completed in 6.132608159s

• [SLOW TEST:15.494 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
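Note: the label round trip above can be reproduced with the commands from the log (minus the kubeconfig flag); the trailing '-' form removes a label, and the namespace exists only during the run:

kubectl label pods pause testing-label=testing-label-value -n kubectl-1432
kubectl get pod pause -L testing-label -n kubectl-1432   # -L adds the label as a column
kubectl label pods pause testing-label- -n kubectl-1432
kubectl get pod pause -L testing-label -n kubectl-1432   # the column is now empty

------------------------------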
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:45:40.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7361
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 23 13:45:40.313: INFO: Found 0 stateful pods, waiting for 3
Jan 23 13:45:50.343: INFO: Found 2 stateful pods, waiting for 3
Jan 23 13:46:00.351: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 13:46:00.351: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 13:46:00.351: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 23 13:46:10.331: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 13:46:10.331: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 13:46:10.331: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 13:46:10.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7361 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 13:46:11.047: INFO: stderr: "I0123 13:46:10.621828    2005 log.go:172] (0xc000904c60) (0xc0008c6b40) Create stream\nI0123 13:46:10.622045    2005 log.go:172] (0xc000904c60) (0xc0008c6b40) Stream added, broadcasting: 1\nI0123 13:46:10.637935    2005 log.go:172] (0xc000904c60) Reply frame received for 1\nI0123 13:46:10.638003    2005 log.go:172] (0xc000904c60) (0xc0008c6000) Create stream\nI0123 13:46:10.638022    2005 log.go:172] (0xc000904c60) (0xc0008c6000) Stream added, broadcasting: 3\nI0123 13:46:10.639677    2005 log.go:172] (0xc000904c60) Reply frame received for 3\nI0123 13:46:10.639734    2005 log.go:172] (0xc000904c60) (0xc000558280) Create stream\nI0123 13:46:10.639761    2005 log.go:172] (0xc000904c60) (0xc000558280) Stream added, broadcasting: 5\nI0123 13:46:10.641565    2005 log.go:172] (0xc000904c60) Reply frame received for 5\nI0123 13:46:10.890201    2005 log.go:172] (0xc000904c60) Data frame received for 5\nI0123 13:46:10.890298    2005 log.go:172] (0xc000558280) (5) Data frame handling\nI0123 13:46:10.890394    2005 log.go:172] (0xc000558280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0123 13:46:10.909927    2005 log.go:172] (0xc000904c60) Data frame received for 3\nI0123 13:46:10.909968    2005 log.go:172] (0xc0008c6000) (3) Data frame handling\nI0123 13:46:10.910002    2005 log.go:172] (0xc0008c6000) (3) Data frame sent\nI0123 13:46:11.032603    2005 log.go:172] (0xc000904c60) Data frame received for 1\nI0123 13:46:11.032737    2005 log.go:172] (0xc000904c60) (0xc000558280) Stream removed, broadcasting: 5\nI0123 13:46:11.032837    2005 log.go:172] (0xc0008c6b40) (1) Data frame handling\nI0123 13:46:11.032856    2005 log.go:172] (0xc0008c6b40) (1) Data frame sent\nI0123 13:46:11.033130    2005 log.go:172] (0xc000904c60) (0xc0008c6b40) Stream removed, broadcasting: 1\nI0123 13:46:11.034150    2005 log.go:172] (0xc000904c60) (0xc0008c6000) Stream removed, broadcasting: 3\nI0123 13:46:11.034224    2005 log.go:172] (0xc000904c60) Go away received\nI0123 13:46:11.034766    2005 log.go:172] (0xc000904c60) (0xc0008c6b40) Stream removed, broadcasting: 1\nI0123 13:46:11.034808    2005 log.go:172] (0xc000904c60) (0xc0008c6000) Stream removed, broadcasting: 3\nI0123 13:46:11.034827    2005 log.go:172] (0xc000904c60) (0xc000558280) Stream removed, broadcasting: 5\n"
Jan 23 13:46:11.048: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 13:46:11.048: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 23 13:46:21.110: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 23 13:46:31.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7361 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 13:46:31.710: INFO: stderr: "I0123 13:46:31.485624    2024 log.go:172] (0xc0007bc790) (0xc000696640) Create stream\nI0123 13:46:31.486648    2024 log.go:172] (0xc0007bc790) (0xc000696640) Stream added, broadcasting: 1\nI0123 13:46:31.489604    2024 log.go:172] (0xc0007bc790) Reply frame received for 1\nI0123 13:46:31.489642    2024 log.go:172] (0xc0007bc790) (0xc000020320) Create stream\nI0123 13:46:31.489656    2024 log.go:172] (0xc0007bc790) (0xc000020320) Stream added, broadcasting: 3\nI0123 13:46:31.490968    2024 log.go:172] (0xc0007bc790) Reply frame received for 3\nI0123 13:46:31.491001    2024 log.go:172] (0xc0007bc790) (0xc000982000) Create stream\nI0123 13:46:31.491010    2024 log.go:172] (0xc0007bc790) (0xc000982000) Stream added, broadcasting: 5\nI0123 13:46:31.491961    2024 log.go:172] (0xc0007bc790) Reply frame received for 5\nI0123 13:46:31.583476    2024 log.go:172] (0xc0007bc790) Data frame received for 3\nI0123 13:46:31.583534    2024 log.go:172] (0xc000020320) (3) Data frame handling\nI0123 13:46:31.583555    2024 log.go:172] (0xc000020320) (3) Data frame sent\nI0123 13:46:31.583626    2024 log.go:172] (0xc0007bc790) Data frame received for 5\nI0123 13:46:31.583667    2024 log.go:172] (0xc000982000) (5) Data frame handling\nI0123 13:46:31.583681    2024 log.go:172] (0xc000982000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0123 13:46:31.702493    2024 log.go:172] (0xc0007bc790) Data frame received for 1\nI0123 13:46:31.702585    2024 log.go:172] (0xc0007bc790) (0xc000020320) Stream removed, broadcasting: 3\nI0123 13:46:31.702633    2024 log.go:172] (0xc000696640) (1) Data frame handling\nI0123 13:46:31.702651    2024 log.go:172] (0xc000696640) (1) Data frame sent\nI0123 13:46:31.702683    2024 log.go:172] (0xc0007bc790) (0xc000982000) Stream removed, broadcasting: 5\nI0123 13:46:31.702716    2024 log.go:172] (0xc0007bc790) (0xc000696640) Stream removed, broadcasting: 1\nI0123 13:46:31.702733    2024 log.go:172] (0xc0007bc790) Go away received\nI0123 13:46:31.703302    2024 log.go:172] (0xc0007bc790) (0xc000696640) Stream removed, broadcasting: 1\nI0123 13:46:31.703331    2024 log.go:172] (0xc0007bc790) (0xc000020320) Stream removed, broadcasting: 3\nI0123 13:46:31.703343    2024 log.go:172] (0xc0007bc790) (0xc000982000) Stream removed, broadcasting: 5\n"
Jan 23 13:46:31.711: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 13:46:31.711: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 23 13:46:41.774: INFO: Waiting for StatefulSet statefulset-7361/ss2 to complete update
Jan 23 13:46:41.774: INFO: Waiting for Pod statefulset-7361/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 13:46:41.774: INFO: Waiting for Pod statefulset-7361/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 13:46:41.774: INFO: Waiting for Pod statefulset-7361/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 13:46:51.809: INFO: Waiting for StatefulSet statefulset-7361/ss2 to complete update
Jan 23 13:46:51.809: INFO: Waiting for Pod statefulset-7361/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 13:46:51.809: INFO: Waiting for Pod statefulset-7361/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 13:47:01.793: INFO: Waiting for StatefulSet statefulset-7361/ss2 to complete update
Jan 23 13:47:01.793: INFO: Waiting for Pod statefulset-7361/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 13:47:01.793: INFO: Waiting for Pod statefulset-7361/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 13:47:11.800: INFO: Waiting for StatefulSet statefulset-7361/ss2 to complete update
Jan 23 13:47:11.801: INFO: Waiting for Pod statefulset-7361/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 13:47:21.796: INFO: Waiting for StatefulSet statefulset-7361/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 23 13:47:31.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7361 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 13:47:32.237: INFO: stderr: "I0123 13:47:32.002224    2043 log.go:172] (0xc00075c0b0) (0xc0007e66e0) Create stream\nI0123 13:47:32.002297    2043 log.go:172] (0xc00075c0b0) (0xc0007e66e0) Stream added, broadcasting: 1\nI0123 13:47:32.005847    2043 log.go:172] (0xc00075c0b0) Reply frame received for 1\nI0123 13:47:32.005897    2043 log.go:172] (0xc00075c0b0) (0xc0007e6780) Create stream\nI0123 13:47:32.005905    2043 log.go:172] (0xc00075c0b0) (0xc0007e6780) Stream added, broadcasting: 3\nI0123 13:47:32.007366    2043 log.go:172] (0xc00075c0b0) Reply frame received for 3\nI0123 13:47:32.007404    2043 log.go:172] (0xc00075c0b0) (0xc0003541e0) Create stream\nI0123 13:47:32.007419    2043 log.go:172] (0xc00075c0b0) (0xc0003541e0) Stream added, broadcasting: 5\nI0123 13:47:32.009077    2043 log.go:172] (0xc00075c0b0) Reply frame received for 5\nI0123 13:47:32.126060    2043 log.go:172] (0xc00075c0b0) Data frame received for 5\nI0123 13:47:32.126898    2043 log.go:172] (0xc0003541e0) (5) Data frame handling\nI0123 13:47:32.126966    2043 log.go:172] (0xc0003541e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0123 13:47:32.150781    2043 log.go:172] (0xc00075c0b0) Data frame received for 3\nI0123 13:47:32.150852    2043 log.go:172] (0xc0007e6780) (3) Data frame handling\nI0123 13:47:32.150877    2043 log.go:172] (0xc0007e6780) (3) Data frame sent\nI0123 13:47:32.230515    2043 log.go:172] (0xc00075c0b0) Data frame received for 1\nI0123 13:47:32.230654    2043 log.go:172] (0xc00075c0b0) (0xc0003541e0) Stream removed, broadcasting: 5\nI0123 13:47:32.230697    2043 log.go:172] (0xc0007e66e0) (1) Data frame handling\nI0123 13:47:32.230714    2043 log.go:172] (0xc0007e66e0) (1) Data frame sent\nI0123 13:47:32.230781    2043 log.go:172] (0xc00075c0b0) (0xc0007e6780) Stream removed, broadcasting: 3\nI0123 13:47:32.230807    2043 log.go:172] (0xc00075c0b0) (0xc0007e66e0) Stream removed, broadcasting: 1\nI0123 13:47:32.230819    2043 log.go:172] (0xc00075c0b0) Go away received\nI0123 13:47:32.231677    2043 log.go:172] (0xc00075c0b0) (0xc0007e66e0) Stream removed, broadcasting: 1\nI0123 13:47:32.231695    2043 log.go:172] (0xc00075c0b0) (0xc0007e6780) Stream removed, broadcasting: 3\nI0123 13:47:32.231709    2043 log.go:172] (0xc00075c0b0) (0xc0003541e0) Stream removed, broadcasting: 5\n"
Jan 23 13:47:32.237: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 13:47:32.237: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 23 13:47:42.287: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 23 13:47:52.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7361 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 13:47:52.660: INFO: stderr: "I0123 13:47:52.486905    2064 log.go:172] (0xc000a100b0) (0xc000a00780) Create stream\nI0123 13:47:52.487097    2064 log.go:172] (0xc000a100b0) (0xc000a00780) Stream added, broadcasting: 1\nI0123 13:47:52.491020    2064 log.go:172] (0xc000a100b0) Reply frame received for 1\nI0123 13:47:52.491094    2064 log.go:172] (0xc000a100b0) (0xc000a00820) Create stream\nI0123 13:47:52.491105    2064 log.go:172] (0xc000a100b0) (0xc000a00820) Stream added, broadcasting: 3\nI0123 13:47:52.494958    2064 log.go:172] (0xc000a100b0) Reply frame received for 3\nI0123 13:47:52.494992    2064 log.go:172] (0xc000a100b0) (0xc0003f9b80) Create stream\nI0123 13:47:52.495001    2064 log.go:172] (0xc000a100b0) (0xc0003f9b80) Stream added, broadcasting: 5\nI0123 13:47:52.497579    2064 log.go:172] (0xc000a100b0) Reply frame received for 5\nI0123 13:47:52.581501    2064 log.go:172] (0xc000a100b0) Data frame received for 5\nI0123 13:47:52.581577    2064 log.go:172] (0xc0003f9b80) (5) Data frame handling\nI0123 13:47:52.581586    2064 log.go:172] (0xc0003f9b80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0123 13:47:52.583135    2064 log.go:172] (0xc000a100b0) Data frame received for 3\nI0123 13:47:52.583194    2064 log.go:172] (0xc000a00820) (3) Data frame handling\nI0123 13:47:52.583205    2064 log.go:172] (0xc000a00820) (3) Data frame sent\nI0123 13:47:52.654195    2064 log.go:172] (0xc000a100b0) (0xc000a00820) Stream removed, broadcasting: 3\nI0123 13:47:52.654361    2064 log.go:172] (0xc000a100b0) Data frame received for 1\nI0123 13:47:52.654383    2064 log.go:172] (0xc000a00780) (1) Data frame handling\nI0123 13:47:52.654391    2064 log.go:172] (0xc000a100b0) (0xc0003f9b80) Stream removed, broadcasting: 5\nI0123 13:47:52.654410    2064 log.go:172] (0xc000a00780) (1) Data frame sent\nI0123 13:47:52.654427    2064 log.go:172] (0xc000a100b0) (0xc000a00780) Stream removed, broadcasting: 1\nI0123 13:47:52.654445    2064 log.go:172] (0xc000a100b0) Go away received\nI0123 13:47:52.654941    2064 log.go:172] (0xc000a100b0) (0xc000a00780) Stream removed, broadcasting: 1\nI0123 13:47:52.654953    2064 log.go:172] (0xc000a100b0) (0xc000a00820) Stream removed, broadcasting: 3\nI0123 13:47:52.654957    2064 log.go:172] (0xc000a100b0) (0xc0003f9b80) Stream removed, broadcasting: 5\n"
Jan 23 13:47:52.660: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 13:47:52.660: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 23 13:48:02.687: INFO: Waiting for StatefulSet statefulset-7361/ss2 to complete update
Jan 23 13:48:02.687: INFO: Waiting for Pod statefulset-7361/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 13:48:02.687: INFO: Waiting for Pod statefulset-7361/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 13:48:12.709: INFO: Waiting for StatefulSet statefulset-7361/ss2 to complete update
Jan 23 13:48:12.709: INFO: Waiting for Pod statefulset-7361/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 13:48:12.709: INFO: Waiting for Pod statefulset-7361/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 13:48:22.808: INFO: Waiting for StatefulSet statefulset-7361/ss2 to complete update
Jan 23 13:48:22.808: INFO: Waiting for Pod statefulset-7361/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 23 13:48:32.713: INFO: Waiting for StatefulSet statefulset-7361/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 23 13:48:42.704: INFO: Deleting all statefulset in ns statefulset-7361
Jan 23 13:48:42.709: INFO: Scaling statefulset ss2 to 0
Jan 23 13:49:22.747: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 13:49:22.752: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:49:22.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7361" for this suite.
Jan 23 13:49:30.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:49:30.963: INFO: namespace statefulset-7361 deletion completed in 8.165658405s

• [SLOW TEST:230.774 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
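
Note: the rollover and rollback exercised above can be reproduced by hand against any StatefulSet that uses the default RollingUpdate strategy. A minimal sketch (namespace and set name mirror this run; the container name nginx is an assumption about the test's pod template):

# Trigger a rolling update by changing the pod template image
kubectl -n statefulset-7361 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n statefulset-7361 rollout status statefulset/ss2
# Roll back to the previous revision recorded in the controller history
kubectl -n statefulset-7361 rollout undo statefulset/ss2
kubectl -n statefulset-7361 get controllerrevisions
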
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:49:30.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-92d56186-8aa9-4a9b-a3a5-c32661bebb82
STEP: Creating a pod to test consume secrets
Jan 23 13:49:31.077: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91" in namespace "projected-9678" to be "success or failure"
Jan 23 13:49:31.088: INFO: Pod "pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91": Phase="Pending", Reason="", readiness=false. Elapsed: 10.906419ms
Jan 23 13:49:33.134: INFO: Pod "pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05601566s
Jan 23 13:49:35.154: INFO: Pod "pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07677248s
Jan 23 13:49:37.162: INFO: Pod "pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084180678s
Jan 23 13:49:39.185: INFO: Pod "pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107440316s
Jan 23 13:49:41.192: INFO: Pod "pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114452168s
STEP: Saw pod success
Jan 23 13:49:41.192: INFO: Pod "pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91" satisfied condition "success or failure"
Jan 23 13:49:41.195: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91 container projected-secret-volume-test: 
STEP: delete the pod
Jan 23 13:49:41.355: INFO: Waiting for pod pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91 to disappear
Jan 23 13:49:41.396: INFO: Pod pod-projected-secrets-c0e7a4e7-3c6d-4a54-9b1d-d72247c38b91 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:49:41.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9678" for this suite.
Jan 23 13:49:47.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:49:47.582: INFO: namespace projected-9678 deletion completed in 6.177302893s

• [SLOW TEST:16.618 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
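
Note: this spec boils down to a pod mounting a Secret through a projected volume and reading it back. A minimal sketch of the same shape (the secret, pod, and key names and the busybox image are illustrative, not taken from this run):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: demo-secret
EOF
kubectl logs projected-secret-demo   # prints: value-1
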
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:49:47.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 13:49:47.675: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cc7b9f7-7b1c-49f2-9cca-5aaf61bc995c" in namespace "projected-4265" to be "success or failure"
Jan 23 13:49:47.681: INFO: Pod "downwardapi-volume-0cc7b9f7-7b1c-49f2-9cca-5aaf61bc995c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.939124ms
Jan 23 13:49:49.691: INFO: Pod "downwardapi-volume-0cc7b9f7-7b1c-49f2-9cca-5aaf61bc995c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015737412s
Jan 23 13:49:51.697: INFO: Pod "downwardapi-volume-0cc7b9f7-7b1c-49f2-9cca-5aaf61bc995c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022128064s
Jan 23 13:49:53.708: INFO: Pod "downwardapi-volume-0cc7b9f7-7b1c-49f2-9cca-5aaf61bc995c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033345296s
Jan 23 13:49:55.720: INFO: Pod "downwardapi-volume-0cc7b9f7-7b1c-49f2-9cca-5aaf61bc995c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045296806s
STEP: Saw pod success
Jan 23 13:49:55.721: INFO: Pod "downwardapi-volume-0cc7b9f7-7b1c-49f2-9cca-5aaf61bc995c" satisfied condition "success or failure"
Jan 23 13:49:55.727: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0cc7b9f7-7b1c-49f2-9cca-5aaf61bc995c container client-container: 
STEP: delete the pod
Jan 23 13:49:55.811: INFO: Waiting for pod downwardapi-volume-0cc7b9f7-7b1c-49f2-9cca-5aaf61bc995c to disappear
Jan 23 13:49:55.838: INFO: Pod downwardapi-volume-0cc7b9f7-7b1c-49f2-9cca-5aaf61bc995c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:49:55.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4265" for this suite.
Jan 23 13:50:01.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:50:02.050: INFO: namespace projected-4265 deletion completed in 6.199548353s

• [SLOW TEST:14.467 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
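
Note: the memory limit reaches the container through a projected downwardAPI volume item with a resourceFieldRef. A minimal sketch (names and the 64Mi limit are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs downwardapi-memlimit-demo   # prints the limit in bytes: 67108864
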
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:50:02.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 23 13:50:02.177: INFO: Waiting up to 5m0s for pod "pod-a8ac9a02-f4d7-4bc9-97ac-e55496edc774" in namespace "emptydir-9602" to be "success or failure"
Jan 23 13:50:02.211: INFO: Pod "pod-a8ac9a02-f4d7-4bc9-97ac-e55496edc774": Phase="Pending", Reason="", readiness=false. Elapsed: 33.513727ms
Jan 23 13:50:04.274: INFO: Pod "pod-a8ac9a02-f4d7-4bc9-97ac-e55496edc774": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096229742s
Jan 23 13:50:06.281: INFO: Pod "pod-a8ac9a02-f4d7-4bc9-97ac-e55496edc774": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103306021s
Jan 23 13:50:08.288: INFO: Pod "pod-a8ac9a02-f4d7-4bc9-97ac-e55496edc774": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110649891s
Jan 23 13:50:10.298: INFO: Pod "pod-a8ac9a02-f4d7-4bc9-97ac-e55496edc774": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.119894519s
STEP: Saw pod success
Jan 23 13:50:10.298: INFO: Pod "pod-a8ac9a02-f4d7-4bc9-97ac-e55496edc774" satisfied condition "success or failure"
Jan 23 13:50:10.303: INFO: Trying to get logs from node iruya-node pod pod-a8ac9a02-f4d7-4bc9-97ac-e55496edc774 container test-container: 
STEP: delete the pod
Jan 23 13:50:10.428: INFO: Waiting for pod pod-a8ac9a02-f4d7-4bc9-97ac-e55496edc774 to disappear
Jan 23 13:50:10.561: INFO: Pod pod-a8ac9a02-f4d7-4bc9-97ac-e55496edc774 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:50:10.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9602" for this suite.
Jan 23 13:50:16.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:50:16.860: INFO: namespace emptydir-9602 deletion completed in 6.28448637s

• [SLOW TEST:14.809 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
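
Note: the emptyDir permission specs (this one and the 0644 variant later in the run) amount to writing a file with the requested mode into an emptyDir volume while running as a non-root user. A hand-rolled approximation (pod name, UID, and mode are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/volume/f && chmod 0777 /mnt/volume/f && ls -l /mnt/volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir: {}
EOF
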
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:50:16.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 23 13:50:25.098: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:50:25.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2255" for this suite.
Jan 23 13:50:31.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:50:31.346: INFO: namespace container-runtime-2255 deletion completed in 6.186859321s

• [SLOW TEST:14.485 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
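
Note: the non-default termination message path is plain pod API: point terminationMessagePath somewhere else and have the container write there. A minimal sketch (pod name, UID, and path are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: term
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-path"]
    terminationMessagePath: /dev/termination-custom-path
EOF
# once the pod has completed:
kubectl get pod termination-msg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
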
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:50:31.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 23 13:50:42.100: INFO: Successfully updated pod "annotationupdatefb5a4165-4a6a-4ac4-95aa-d87328861943"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:50:44.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9573" for this suite.
Jan 23 13:51:06.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:51:06.378: INFO: namespace projected-9573 deletion completed in 22.172374423s

• [SLOW TEST:35.032 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
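
Note: annotation updates reach the container because the kubelet refreshes downwardAPI volumes on its sync loop. A minimal sketch of the same wiring (names and annotation values are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo builder=bob --overwrite
kubectl logs -f annotationupdate-demo   # the new value appears after the kubelet sync period
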
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:51:06.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:51:14.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4228" for this suite.
Jan 23 13:51:58.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:51:58.874: INFO: namespace kubelet-test-4228 deletion completed in 44.231531477s

• [SLOW TEST:52.495 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
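
Note: the read-only root filesystem guarantee comes from the container-level securityContext. A minimal sketch (names are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo test > /file || echo 'write refused, as expected'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
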
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:51:58.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 13:51:58.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf6f06ce-87fc-474e-aab4-1ba3a357ed43" in namespace "projected-3651" to be "success or failure"
Jan 23 13:51:58.997: INFO: Pod "downwardapi-volume-bf6f06ce-87fc-474e-aab4-1ba3a357ed43": Phase="Pending", Reason="", readiness=false. Elapsed: 11.695638ms
Jan 23 13:52:01.005: INFO: Pod "downwardapi-volume-bf6f06ce-87fc-474e-aab4-1ba3a357ed43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019820174s
Jan 23 13:52:03.021: INFO: Pod "downwardapi-volume-bf6f06ce-87fc-474e-aab4-1ba3a357ed43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036199336s
Jan 23 13:52:05.028: INFO: Pod "downwardapi-volume-bf6f06ce-87fc-474e-aab4-1ba3a357ed43": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043540237s
Jan 23 13:52:07.066: INFO: Pod "downwardapi-volume-bf6f06ce-87fc-474e-aab4-1ba3a357ed43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081343099s
STEP: Saw pod success
Jan 23 13:52:07.067: INFO: Pod "downwardapi-volume-bf6f06ce-87fc-474e-aab4-1ba3a357ed43" satisfied condition "success or failure"
Jan 23 13:52:07.077: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bf6f06ce-87fc-474e-aab4-1ba3a357ed43 container client-container: 
STEP: delete the pod
Jan 23 13:52:07.184: INFO: Waiting for pod downwardapi-volume-bf6f06ce-87fc-474e-aab4-1ba3a357ed43 to disappear
Jan 23 13:52:07.191: INFO: Pod downwardapi-volume-bf6f06ce-87fc-474e-aab4-1ba3a357ed43 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:52:07.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3651" for this suite.
Jan 23 13:52:13.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:52:13.380: INFO: namespace projected-3651 deletion completed in 6.181047628s

• [SLOW TEST:14.505 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
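
Note: when the container declares no memory limit, the projected downwardAPI value falls back to the node's allocatable memory, which can be cross-checked directly:

kubectl get node iruya-node -o jsonpath='{.status.allocatable.memory}'
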
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:52:13.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 23 13:52:13.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1960'
Jan 23 13:52:15.337: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 13:52:15.338: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 23 13:52:17.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1960'
Jan 23 13:52:17.588: INFO: stderr: ""
Jan 23 13:52:17.588: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:52:17.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1960" for this suite.
Jan 23 13:52:23.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:52:23.863: INFO: namespace kubectl-1960 deletion completed in 6.266838386s

• [SLOW TEST:10.484 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
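
Note: as the stderr warning above says, --generator=deployment/apps.v1 is deprecated; the non-deprecated equivalent of this spec's command is kubectl create deployment:

kubectl -n kubectl-1960 create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
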
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:52:23.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 23 13:52:23.968: INFO: Waiting up to 5m0s for pod "pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8" in namespace "emptydir-6591" to be "success or failure"
Jan 23 13:52:23.982: INFO: Pod "pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.844818ms
Jan 23 13:52:25.991: INFO: Pod "pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023253478s
Jan 23 13:52:27.999: INFO: Pod "pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03098958s
Jan 23 13:52:30.006: INFO: Pod "pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038092655s
Jan 23 13:52:32.016: INFO: Pod "pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047986073s
Jan 23 13:52:34.027: INFO: Pod "pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058580074s
STEP: Saw pod success
Jan 23 13:52:34.027: INFO: Pod "pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8" satisfied condition "success or failure"
Jan 23 13:52:34.031: INFO: Trying to get logs from node iruya-node pod pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8 container test-container: 
STEP: delete the pod
Jan 23 13:52:34.137: INFO: Waiting for pod pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8 to disappear
Jan 23 13:52:34.146: INFO: Pod pod-ba43c1ea-dfaf-461f-80cc-0fcba5e3d1a8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:52:34.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6591" for this suite.
Jan 23 13:52:40.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:52:40.432: INFO: namespace emptydir-6591 deletion completed in 6.27877927s

• [SLOW TEST:16.567 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:52:40.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-0a6f4a4a-f4d6-4f2b-89cc-090cfefa3cc9
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-0a6f4a4a-f4d6-4f2b-89cc-090cfefa3cc9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:52:50.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6303" for this suite.
Jan 23 13:53:12.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:53:12.880: INFO: namespace projected-6303 deletion completed in 22.168645482s

• [SLOW TEST:32.448 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
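
Note: this spec depends on the kubelet updating projected ConfigMap volumes in place rather than requiring a pod restart. A minimal reproduction (names and values are illustrative):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config
  volumes:
  - name: config-vol
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f projected-configmap-demo   # the mounted file catches up after the kubelet sync period
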
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:53:12.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 23 13:56:12.254: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 13:56:12.293: INFO: Pod pod-with-poststart-exec-hook still exists
[... the same check repeats every 2 seconds from 13:56:14 through 13:57:56, each iteration logging "Waiting for pod pod-with-poststart-exec-hook to disappear" followed by "Pod pod-with-poststart-exec-hook still exists" ...]
Jan 23 13:57:58.293: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 23 13:57:58.303: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:57:58.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7456" for this suite.
Jan 23 13:58:20.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:58:20.491: INFO: namespace container-lifecycle-hook-7456 deletion completed in 22.179333404s

• [SLOW TEST:307.610 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
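
Note: the postStart exec hook polled above is ordinary pod API. A minimal sketch (the hook command and file path are illustrative; the pod name matches the one in this spec):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: hooked
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart-ran > /tmp/poststart"]
EOF
kubectl exec pod-with-poststart-exec-hook -- cat /tmp/poststart   # prints: poststart-ran
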
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:58:20.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 23 13:58:20.644: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 23 13:58:20.652: INFO: Waiting for terminating namespaces to be deleted...
Jan 23 13:58:20.655: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 23 13:58:20.664: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan 23 13:58:20.664: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 23 13:58:20.664: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 23 13:58:20.664: INFO: 	Container weave ready: true, restart count 0
Jan 23 13:58:20.664: INFO: 	Container weave-npc ready: true, restart count 0
Jan 23 13:58:20.664: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 23 13:58:20.673: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan 23 13:58:20.673: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 23 13:58:20.673: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 23 13:58:20.673: INFO: 	Container coredns ready: true, restart count 0
Jan 23 13:58:20.673: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan 23 13:58:20.673: INFO: 	Container etcd ready: true, restart count 0
Jan 23 13:58:20.673: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 23 13:58:20.673: INFO: 	Container weave ready: true, restart count 0
Jan 23 13:58:20.673: INFO: 	Container weave-npc ready: true, restart count 0
Jan 23 13:58:20.673: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 23 13:58:20.673: INFO: 	Container coredns ready: true, restart count 0
Jan 23 13:58:20.673: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan 23 13:58:20.673: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 23 13:58:20.673: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan 23 13:58:20.673: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 23 13:58:20.673: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan 23 13:58:20.673: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-11c9e517-7bf0-42c8-a454-eab9104827f3 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-11c9e517-7bf0-42c8-a454-eab9104827f3 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-11c9e517-7bf0-42c8-a454-eab9104827f3
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:58:37.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2319" for this suite.
Jan 23 13:59:07.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:59:07.245: INFO: namespace sched-pred-2319 deletion completed in 30.21505174s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:46.752 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
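
The flow above: schedule a throwaway pod to discover a schedulable node, apply a random label to that node, then relaunch the pod with a matching nodeSelector and confirm it lands there. A minimal sketch of such a pod spec using the k8s.io/api Go types follows; the label key/value and image are illustrative, not the ones the suite generates.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod that only schedules onto nodes carrying the (hypothetical) label
	// example.com/e2e-demo=42, mirroring the label-then-relaunch flow above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"example.com/e2e-demo": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

If no node carries the label, the pod stays Pending with a FailedScheduling event; once the label is applied, the scheduler places it on that node.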
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:59:07.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 23 13:59:16.069: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:59:16.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6879" for this suite.
Jan 23 13:59:22.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:59:22.315: INFO: namespace container-runtime-6879 deletion completed in 6.187791355s

• [SLOW TEST:15.070 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
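
What this test exercises: with TerminationMessagePolicy set to FallbackToLogsOnError, a container that exits nonzero without writing to its termination-message path gets the tail of its log ("DONE" above) copied into the terminated state's message. A minimal sketch of such a container spec, with an illustrative image and command:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container writes nothing to /dev/termination-log and exits nonzero;
	// with FallbackToLogsOnError the kubelet falls back to the tail of the
	// container log ("DONE") as the termination message.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "main",
				Image:                    "busybox",
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}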
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:59:22.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 13:59:22.705: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 23 13:59:25.620: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:59:26.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1874" for this suite.
Jan 23 13:59:34.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:59:34.152: INFO: namespace replication-controller-1874 deletion completed in 8.09986149s

• [SLOW TEST:11.837 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
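
The mechanism here: a ResourceQuota caps the namespace at two pods, an RC then asks for three replicas, and the quota admission check rejects the third pod, which the RC surfaces as a failure condition until it is scaled back within quota. A sketch of the two objects involved (names mirror the log; image and selector are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Quota capping the namespace at two pods.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	// RC asking for three replicas; the third create is rejected by quota,
	// so the RC reports a failure condition until scaled down to two.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(3),
			Selector: map[string]string{"app": "condition-test"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "condition-test"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name: "pause", Image: "k8s.gcr.io/pause:3.1",
				}}},
			},
		},
	}
	for _, obj := range []interface{}{quota, rc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}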
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:59:34.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan 23 13:59:36.223: INFO: Waiting up to 5m0s for pod "client-containers-e742021e-c8c8-449c-bbcb-a47c90ca7c17" in namespace "containers-7144" to be "success or failure"
Jan 23 13:59:36.364: INFO: Pod "client-containers-e742021e-c8c8-449c-bbcb-a47c90ca7c17": Phase="Pending", Reason="", readiness=false. Elapsed: 140.810713ms
Jan 23 13:59:38.375: INFO: Pod "client-containers-e742021e-c8c8-449c-bbcb-a47c90ca7c17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152429933s
Jan 23 13:59:40.384: INFO: Pod "client-containers-e742021e-c8c8-449c-bbcb-a47c90ca7c17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161438423s
Jan 23 13:59:42.405: INFO: Pod "client-containers-e742021e-c8c8-449c-bbcb-a47c90ca7c17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181950238s
Jan 23 13:59:44.416: INFO: Pod "client-containers-e742021e-c8c8-449c-bbcb-a47c90ca7c17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.193635296s
STEP: Saw pod success
Jan 23 13:59:44.417: INFO: Pod "client-containers-e742021e-c8c8-449c-bbcb-a47c90ca7c17" satisfied condition "success or failure"
Jan 23 13:59:44.421: INFO: Trying to get logs from node iruya-node pod client-containers-e742021e-c8c8-449c-bbcb-a47c90ca7c17 container test-container: 
STEP: delete the pod
Jan 23 13:59:44.537: INFO: Waiting for pod client-containers-e742021e-c8c8-449c-bbcb-a47c90ca7c17 to disappear
Jan 23 13:59:44.544: INFO: Pod client-containers-e742021e-c8c8-449c-bbcb-a47c90ca7c17 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 13:59:44.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7144" for this suite.
Jan 23 13:59:50.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 13:59:50.784: INFO: namespace containers-7144 deletion completed in 6.232824938s

• [SLOW TEST:16.632 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
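
In pod-spec terms, "overriding the image's default command" means setting Container.Command, which replaces the image's ENTRYPOINT (Args would replace its CMD). A sketch with an illustrative image and command:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Command replaces the image's ENTRYPOINT; leaving it unset would run
	// whatever entrypoint the image was built with.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/echo", "entrypoint overridden"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}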
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 13:59:50.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7123
I0123 13:59:50.939217       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7123, replica count: 1
I0123 13:59:51.990512       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 13:59:52.991065       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 13:59:53.991912       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 13:59:54.992353       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 13:59:55.992715       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 13:59:56.993333       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0123 13:59:57.993902       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 23 13:59:58.160: INFO: Created: latency-svc-8j9pc
Jan 23 13:59:58.171: INFO: Got endpoints: latency-svc-8j9pc [76.9806ms]
Jan 23 13:59:58.295: INFO: Created: latency-svc-f4955
Jan 23 13:59:58.320: INFO: Got endpoints: latency-svc-f4955 [146.741685ms]
Jan 23 13:59:58.356: INFO: Created: latency-svc-5jgcz
Jan 23 13:59:58.383: INFO: Got endpoints: latency-svc-5jgcz [211.31823ms]
Jan 23 13:59:58.521: INFO: Created: latency-svc-4sjp7
Jan 23 13:59:58.563: INFO: Created: latency-svc-zsb6z
Jan 23 13:59:58.564: INFO: Got endpoints: latency-svc-4sjp7 [391.599934ms]
Jan 23 13:59:58.575: INFO: Got endpoints: latency-svc-zsb6z [401.440391ms]
Jan 23 13:59:58.724: INFO: Created: latency-svc-dz256
Jan 23 13:59:58.728: INFO: Got endpoints: latency-svc-dz256 [554.421923ms]
Jan 23 13:59:58.772: INFO: Created: latency-svc-2f798
Jan 23 13:59:58.780: INFO: Got endpoints: latency-svc-2f798 [607.538397ms]
Jan 23 13:59:58.865: INFO: Created: latency-svc-z5wxm
Jan 23 13:59:58.883: INFO: Got endpoints: latency-svc-z5wxm [709.627311ms]
Jan 23 13:59:58.916: INFO: Created: latency-svc-qxkkh
Jan 23 13:59:58.927: INFO: Got endpoints: latency-svc-qxkkh [752.93541ms]
Jan 23 13:59:58.959: INFO: Created: latency-svc-95d9k
Jan 23 13:59:59.067: INFO: Created: latency-svc-xntrq
Jan 23 13:59:59.069: INFO: Got endpoints: latency-svc-95d9k [896.061535ms]
Jan 23 13:59:59.082: INFO: Got endpoints: latency-svc-xntrq [908.187872ms]
Jan 23 13:59:59.141: INFO: Created: latency-svc-fphf2
Jan 23 13:59:59.146: INFO: Got endpoints: latency-svc-fphf2 [972.16693ms]
Jan 23 13:59:59.244: INFO: Created: latency-svc-kck4z
Jan 23 13:59:59.255: INFO: Got endpoints: latency-svc-kck4z [1.081634061s]
Jan 23 13:59:59.289: INFO: Created: latency-svc-tj8qx
Jan 23 13:59:59.303: INFO: Got endpoints: latency-svc-tj8qx [1.130085457s]
Jan 23 13:59:59.382: INFO: Created: latency-svc-dw566
Jan 23 13:59:59.392: INFO: Got endpoints: latency-svc-dw566 [1.218873247s]
Jan 23 13:59:59.433: INFO: Created: latency-svc-bfqns
Jan 23 13:59:59.459: INFO: Got endpoints: latency-svc-bfqns [1.285686556s]
Jan 23 13:59:59.467: INFO: Created: latency-svc-72jhp
Jan 23 13:59:59.477: INFO: Got endpoints: latency-svc-72jhp [1.156869332s]
Jan 23 13:59:59.553: INFO: Created: latency-svc-fhh2j
Jan 23 13:59:59.614: INFO: Got endpoints: latency-svc-fhh2j [1.231414397s]
Jan 23 13:59:59.616: INFO: Created: latency-svc-7pxt6
Jan 23 13:59:59.621: INFO: Got endpoints: latency-svc-7pxt6 [1.056914842s]
Jan 23 13:59:59.724: INFO: Created: latency-svc-nfj7z
Jan 23 13:59:59.733: INFO: Got endpoints: latency-svc-nfj7z [1.157976052s]
Jan 23 13:59:59.789: INFO: Created: latency-svc-smwvc
Jan 23 13:59:59.809: INFO: Got endpoints: latency-svc-smwvc [1.0814798s]
Jan 23 13:59:59.954: INFO: Created: latency-svc-n8dfk
Jan 23 13:59:59.957: INFO: Got endpoints: latency-svc-n8dfk [1.176415239s]
Jan 23 14:00:00.053: INFO: Created: latency-svc-j6bsz
Jan 23 14:00:00.179: INFO: Got endpoints: latency-svc-j6bsz [1.295335089s]
Jan 23 14:00:00.213: INFO: Created: latency-svc-2dkr7
Jan 23 14:00:00.219: INFO: Got endpoints: latency-svc-2dkr7 [1.291978898s]
Jan 23 14:00:00.262: INFO: Created: latency-svc-9srhb
Jan 23 14:00:00.349: INFO: Got endpoints: latency-svc-9srhb [1.279099033s]
Jan 23 14:00:00.350: INFO: Created: latency-svc-8c49p
Jan 23 14:00:00.356: INFO: Got endpoints: latency-svc-8c49p [1.273629778s]
Jan 23 14:00:00.434: INFO: Created: latency-svc-tvvvz
Jan 23 14:00:00.439: INFO: Got endpoints: latency-svc-tvvvz [1.292988893s]
Jan 23 14:00:00.591: INFO: Created: latency-svc-vkkw9
Jan 23 14:00:00.633: INFO: Got endpoints: latency-svc-vkkw9 [1.377707575s]
Jan 23 14:00:00.636: INFO: Created: latency-svc-6d986
Jan 23 14:00:00.652: INFO: Got endpoints: latency-svc-6d986 [1.34910147s]
Jan 23 14:00:00.778: INFO: Created: latency-svc-kvf8l
Jan 23 14:00:00.794: INFO: Got endpoints: latency-svc-kvf8l [1.401631212s]
Jan 23 14:00:01.018: INFO: Created: latency-svc-7bp7p
Jan 23 14:00:01.043: INFO: Got endpoints: latency-svc-7bp7p [1.583932212s]
Jan 23 14:00:01.089: INFO: Created: latency-svc-4dpc4
Jan 23 14:00:01.105: INFO: Got endpoints: latency-svc-4dpc4 [1.627821202s]
Jan 23 14:00:01.236: INFO: Created: latency-svc-tlttx
Jan 23 14:00:01.252: INFO: Got endpoints: latency-svc-tlttx [1.637619886s]
Jan 23 14:00:01.353: INFO: Created: latency-svc-99ckz
Jan 23 14:00:01.358: INFO: Got endpoints: latency-svc-99ckz [1.736545394s]
Jan 23 14:00:01.419: INFO: Created: latency-svc-9fj87
Jan 23 14:00:01.434: INFO: Got endpoints: latency-svc-9fj87 [1.700315546s]
Jan 23 14:00:01.536: INFO: Created: latency-svc-mgz8g
Jan 23 14:00:01.543: INFO: Got endpoints: latency-svc-mgz8g [1.733459783s]
Jan 23 14:00:01.584: INFO: Created: latency-svc-f7kjs
Jan 23 14:00:01.607: INFO: Got endpoints: latency-svc-f7kjs [1.649571735s]
Jan 23 14:00:01.690: INFO: Created: latency-svc-tfmz6
Jan 23 14:00:01.697: INFO: Got endpoints: latency-svc-tfmz6 [1.518360693s]
Jan 23 14:00:01.752: INFO: Created: latency-svc-sxhpn
Jan 23 14:00:01.766: INFO: Got endpoints: latency-svc-sxhpn [1.546512241s]
Jan 23 14:00:01.965: INFO: Created: latency-svc-f6ppv
Jan 23 14:00:01.974: INFO: Got endpoints: latency-svc-f6ppv [1.624453235s]
Jan 23 14:00:02.092: INFO: Created: latency-svc-56n5h
Jan 23 14:00:02.156: INFO: Got endpoints: latency-svc-56n5h [1.79975195s]
Jan 23 14:00:02.159: INFO: Created: latency-svc-kpbcw
Jan 23 14:00:02.160: INFO: Got endpoints: latency-svc-kpbcw [1.720998161s]
Jan 23 14:00:02.252: INFO: Created: latency-svc-wpcc9
Jan 23 14:00:02.263: INFO: Got endpoints: latency-svc-wpcc9 [1.627865701s]
Jan 23 14:00:02.323: INFO: Created: latency-svc-2zqx9
Jan 23 14:00:02.337: INFO: Got endpoints: latency-svc-2zqx9 [1.684874566s]
Jan 23 14:00:02.536: INFO: Created: latency-svc-m2j96
Jan 23 14:00:02.633: INFO: Got endpoints: latency-svc-m2j96 [1.838992691s]
Jan 23 14:00:02.674: INFO: Created: latency-svc-pw8mc
Jan 23 14:00:02.685: INFO: Got endpoints: latency-svc-pw8mc [1.641204152s]
Jan 23 14:00:02.826: INFO: Created: latency-svc-8w8h5
Jan 23 14:00:02.867: INFO: Got endpoints: latency-svc-8w8h5 [1.760974341s]
Jan 23 14:00:02.917: INFO: Created: latency-svc-ktvl8
Jan 23 14:00:03.034: INFO: Got endpoints: latency-svc-ktvl8 [1.780919213s]
Jan 23 14:00:03.093: INFO: Created: latency-svc-mdcjw
Jan 23 14:00:03.128: INFO: Got endpoints: latency-svc-mdcjw [261.172366ms]
Jan 23 14:00:03.254: INFO: Created: latency-svc-vsh69
Jan 23 14:00:03.273: INFO: Got endpoints: latency-svc-vsh69 [1.914407279s]
Jan 23 14:00:03.340: INFO: Created: latency-svc-mdfw6
Jan 23 14:00:03.344: INFO: Got endpoints: latency-svc-mdfw6 [1.910299128s]
Jan 23 14:00:03.471: INFO: Created: latency-svc-chwqg
Jan 23 14:00:03.476: INFO: Got endpoints: latency-svc-chwqg [1.932895699s]
Jan 23 14:00:03.599: INFO: Created: latency-svc-25chs
Jan 23 14:00:03.614: INFO: Got endpoints: latency-svc-25chs [2.007083414s]
Jan 23 14:00:03.792: INFO: Created: latency-svc-jtgpj
Jan 23 14:00:03.936: INFO: Got endpoints: latency-svc-jtgpj [2.237961732s]
Jan 23 14:00:03.939: INFO: Created: latency-svc-z5htw
Jan 23 14:00:03.958: INFO: Got endpoints: latency-svc-z5htw [2.192510307s]
Jan 23 14:00:04.037: INFO: Created: latency-svc-6r7q6
Jan 23 14:00:04.124: INFO: Got endpoints: latency-svc-6r7q6 [2.14958091s]
Jan 23 14:00:04.164: INFO: Created: latency-svc-vfktg
Jan 23 14:00:04.291: INFO: Got endpoints: latency-svc-vfktg [2.134729541s]
Jan 23 14:00:04.334: INFO: Created: latency-svc-vg6kw
Jan 23 14:00:04.371: INFO: Got endpoints: latency-svc-vg6kw [2.210675936s]
Jan 23 14:00:04.372: INFO: Created: latency-svc-68ss4
Jan 23 14:00:04.439: INFO: Got endpoints: latency-svc-68ss4 [2.175860394s]
Jan 23 14:00:04.483: INFO: Created: latency-svc-pdxj4
Jan 23 14:00:04.483: INFO: Got endpoints: latency-svc-pdxj4 [2.145129076s]
Jan 23 14:00:04.544: INFO: Created: latency-svc-8mwxf
Jan 23 14:00:04.660: INFO: Got endpoints: latency-svc-8mwxf [2.0264991s]
Jan 23 14:00:04.673: INFO: Created: latency-svc-cdn4z
Jan 23 14:00:04.678: INFO: Got endpoints: latency-svc-cdn4z [1.993319723s]
Jan 23 14:00:04.718: INFO: Created: latency-svc-f59gv
Jan 23 14:00:04.742: INFO: Got endpoints: latency-svc-f59gv [1.707673281s]
Jan 23 14:00:04.919: INFO: Created: latency-svc-vb68c
Jan 23 14:00:04.928: INFO: Got endpoints: latency-svc-vb68c [1.798947385s]
Jan 23 14:00:05.003: INFO: Created: latency-svc-zj6ls
Jan 23 14:00:05.155: INFO: Got endpoints: latency-svc-zj6ls [1.881892475s]
Jan 23 14:00:05.199: INFO: Created: latency-svc-rbj86
Jan 23 14:00:05.213: INFO: Got endpoints: latency-svc-rbj86 [1.868876834s]
Jan 23 14:00:05.249: INFO: Created: latency-svc-x4wpz
Jan 23 14:00:06.192: INFO: Got endpoints: latency-svc-x4wpz [2.716073668s]
Jan 23 14:00:06.218: INFO: Created: latency-svc-r4sdx
Jan 23 14:00:06.231: INFO: Got endpoints: latency-svc-r4sdx [2.616771281s]
Jan 23 14:00:06.267: INFO: Created: latency-svc-8jrvw
Jan 23 14:00:06.281: INFO: Got endpoints: latency-svc-8jrvw [2.344151247s]
Jan 23 14:00:06.375: INFO: Created: latency-svc-kmmb8
Jan 23 14:00:06.384: INFO: Got endpoints: latency-svc-kmmb8 [2.424728178s]
Jan 23 14:00:06.413: INFO: Created: latency-svc-6v2xr
Jan 23 14:00:06.422: INFO: Got endpoints: latency-svc-6v2xr [2.298302852s]
Jan 23 14:00:06.453: INFO: Created: latency-svc-nv9jl
Jan 23 14:00:06.553: INFO: Got endpoints: latency-svc-nv9jl [2.261973879s]
Jan 23 14:00:06.565: INFO: Created: latency-svc-n2kr7
Jan 23 14:00:06.621: INFO: Got endpoints: latency-svc-n2kr7 [2.249823963s]
Jan 23 14:00:06.641: INFO: Created: latency-svc-kt96m
Jan 23 14:00:06.805: INFO: Created: latency-svc-b8kwq
Jan 23 14:00:06.806: INFO: Got endpoints: latency-svc-kt96m [2.36582814s]
Jan 23 14:00:06.826: INFO: Got endpoints: latency-svc-b8kwq [2.343225533s]
Jan 23 14:00:06.898: INFO: Created: latency-svc-dt7hp
Jan 23 14:00:07.073: INFO: Got endpoints: latency-svc-dt7hp [2.412426218s]
Jan 23 14:00:07.083: INFO: Created: latency-svc-p2htj
Jan 23 14:00:07.090: INFO: Got endpoints: latency-svc-p2htj [2.411155907s]
Jan 23 14:00:07.142: INFO: Created: latency-svc-g5nrx
Jan 23 14:00:07.208: INFO: Got endpoints: latency-svc-g5nrx [2.46601869s]
Jan 23 14:00:07.242: INFO: Created: latency-svc-g6scd
Jan 23 14:00:07.253: INFO: Got endpoints: latency-svc-g6scd [2.325284936s]
Jan 23 14:00:07.290: INFO: Created: latency-svc-m4h6t
Jan 23 14:00:07.302: INFO: Got endpoints: latency-svc-m4h6t [2.146875838s]
Jan 23 14:00:07.390: INFO: Created: latency-svc-xfnbf
Jan 23 14:00:07.398: INFO: Got endpoints: latency-svc-xfnbf [2.184791257s]
Jan 23 14:00:07.430: INFO: Created: latency-svc-gsj2d
Jan 23 14:00:07.463: INFO: Got endpoints: latency-svc-gsj2d [1.270016177s]
Jan 23 14:00:07.585: INFO: Created: latency-svc-b26g8
Jan 23 14:00:07.609: INFO: Got endpoints: latency-svc-b26g8 [1.377631293s]
Jan 23 14:00:07.664: INFO: Created: latency-svc-g249t
Jan 23 14:00:07.664: INFO: Got endpoints: latency-svc-g249t [1.38298409s]
Jan 23 14:00:07.688: INFO: Created: latency-svc-82tss
Jan 23 14:00:07.748: INFO: Got endpoints: latency-svc-82tss [1.363653926s]
Jan 23 14:00:07.822: INFO: Created: latency-svc-b6xpg
Jan 23 14:00:08.016: INFO: Created: latency-svc-bq8xg
Jan 23 14:00:08.018: INFO: Got endpoints: latency-svc-b6xpg [1.59533133s]
Jan 23 14:00:08.033: INFO: Got endpoints: latency-svc-bq8xg [1.478852539s]
Jan 23 14:00:08.080: INFO: Created: latency-svc-dvtph
Jan 23 14:00:08.253: INFO: Created: latency-svc-9dz9q
Jan 23 14:00:08.254: INFO: Got endpoints: latency-svc-dvtph [1.631388336s]
Jan 23 14:00:08.263: INFO: Got endpoints: latency-svc-9dz9q [1.457124012s]
Jan 23 14:00:08.294: INFO: Created: latency-svc-88xfq
Jan 23 14:00:08.310: INFO: Got endpoints: latency-svc-88xfq [1.48418043s]
Jan 23 14:00:08.357: INFO: Created: latency-svc-94hvd
Jan 23 14:00:08.481: INFO: Got endpoints: latency-svc-94hvd [1.406864405s]
Jan 23 14:00:08.488: INFO: Created: latency-svc-9mssk
Jan 23 14:00:08.515: INFO: Got endpoints: latency-svc-9mssk [1.425261923s]
Jan 23 14:00:08.582: INFO: Created: latency-svc-xcd4l
Jan 23 14:00:08.680: INFO: Got endpoints: latency-svc-xcd4l [1.472054558s]
Jan 23 14:00:08.695: INFO: Created: latency-svc-lpgwf
Jan 23 14:00:08.706: INFO: Got endpoints: latency-svc-lpgwf [1.452513394s]
Jan 23 14:00:08.751: INFO: Created: latency-svc-gsgxn
Jan 23 14:00:08.760: INFO: Got endpoints: latency-svc-gsgxn [1.457493913s]
Jan 23 14:00:08.900: INFO: Created: latency-svc-ml62k
Jan 23 14:00:08.915: INFO: Got endpoints: latency-svc-ml62k [1.51620908s]
Jan 23 14:00:08.954: INFO: Created: latency-svc-qfn9x
Jan 23 14:00:08.969: INFO: Got endpoints: latency-svc-qfn9x [1.505674272s]
Jan 23 14:00:09.094: INFO: Created: latency-svc-q6tbv
Jan 23 14:00:09.109: INFO: Got endpoints: latency-svc-q6tbv [1.499692077s]
Jan 23 14:00:09.148: INFO: Created: latency-svc-74bbk
Jan 23 14:00:09.152: INFO: Got endpoints: latency-svc-74bbk [1.488179272s]
Jan 23 14:00:09.200: INFO: Created: latency-svc-h8bg2
Jan 23 14:00:09.383: INFO: Got endpoints: latency-svc-h8bg2 [1.634696136s]
Jan 23 14:00:09.414: INFO: Created: latency-svc-6r64z
Jan 23 14:00:09.416: INFO: Got endpoints: latency-svc-6r64z [1.397942455s]
Jan 23 14:00:09.454: INFO: Created: latency-svc-269pg
Jan 23 14:00:09.461: INFO: Got endpoints: latency-svc-269pg [1.428499473s]
Jan 23 14:00:09.604: INFO: Created: latency-svc-rk75x
Jan 23 14:00:09.621: INFO: Got endpoints: latency-svc-rk75x [1.366999806s]
Jan 23 14:00:09.682: INFO: Created: latency-svc-7rhtn
Jan 23 14:00:09.682: INFO: Got endpoints: latency-svc-7rhtn [1.418916464s]
Jan 23 14:00:09.828: INFO: Created: latency-svc-j9qbn
Jan 23 14:00:09.864: INFO: Created: latency-svc-6x2cl
Jan 23 14:00:09.867: INFO: Got endpoints: latency-svc-j9qbn [1.556122775s]
Jan 23 14:00:09.875: INFO: Got endpoints: latency-svc-6x2cl [1.393228154s]
Jan 23 14:00:09.918: INFO: Created: latency-svc-nrr5l
Jan 23 14:00:10.047: INFO: Got endpoints: latency-svc-nrr5l [1.531556902s]
Jan 23 14:00:10.083: INFO: Created: latency-svc-kvdpn
Jan 23 14:00:10.099: INFO: Got endpoints: latency-svc-kvdpn [1.417618005s]
Jan 23 14:00:10.155: INFO: Created: latency-svc-jj7rv
Jan 23 14:00:10.280: INFO: Got endpoints: latency-svc-jj7rv [1.573962481s]
Jan 23 14:00:10.304: INFO: Created: latency-svc-zb6hq
Jan 23 14:00:10.314: INFO: Got endpoints: latency-svc-zb6hq [1.553649108s]
Jan 23 14:00:10.351: INFO: Created: latency-svc-hqmns
Jan 23 14:00:10.351: INFO: Got endpoints: latency-svc-hqmns [1.435840603s]
Jan 23 14:00:10.529: INFO: Created: latency-svc-jmfcn
Jan 23 14:00:10.563: INFO: Got endpoints: latency-svc-jmfcn [1.594122264s]
Jan 23 14:00:10.606: INFO: Created: latency-svc-hpsjn
Jan 23 14:00:10.609: INFO: Got endpoints: latency-svc-hpsjn [1.499570476s]
Jan 23 14:00:10.761: INFO: Created: latency-svc-dcqqc
Jan 23 14:00:10.806: INFO: Got endpoints: latency-svc-dcqqc [1.653837918s]
Jan 23 14:00:10.815: INFO: Created: latency-svc-5kh8s
Jan 23 14:00:10.824: INFO: Got endpoints: latency-svc-5kh8s [1.440405794s]
Jan 23 14:00:11.119: INFO: Created: latency-svc-lmx9j
Jan 23 14:00:11.161: INFO: Got endpoints: latency-svc-lmx9j [1.744785069s]
Jan 23 14:00:11.168: INFO: Created: latency-svc-dc22g
Jan 23 14:00:11.179: INFO: Got endpoints: latency-svc-dc22g [1.717288522s]
Jan 23 14:00:11.321: INFO: Created: latency-svc-rnk7z
Jan 23 14:00:11.376: INFO: Created: latency-svc-pcnxr
Jan 23 14:00:11.377: INFO: Got endpoints: latency-svc-rnk7z [1.756197836s]
Jan 23 14:00:11.533: INFO: Got endpoints: latency-svc-pcnxr [1.850655037s]
Jan 23 14:00:11.578: INFO: Created: latency-svc-f5cgl
Jan 23 14:00:11.598: INFO: Got endpoints: latency-svc-f5cgl [1.731613735s]
Jan 23 14:00:11.635: INFO: Created: latency-svc-jrpx5
Jan 23 14:00:11.721: INFO: Got endpoints: latency-svc-jrpx5 [1.846257347s]
Jan 23 14:00:11.742: INFO: Created: latency-svc-vbp6h
Jan 23 14:00:11.747: INFO: Got endpoints: latency-svc-vbp6h [1.698889467s]
Jan 23 14:00:11.785: INFO: Created: latency-svc-9cfg8
Jan 23 14:00:11.800: INFO: Got endpoints: latency-svc-9cfg8 [1.701105213s]
Jan 23 14:00:11.993: INFO: Created: latency-svc-7rtfd
Jan 23 14:00:12.012: INFO: Got endpoints: latency-svc-7rtfd [1.731170616s]
Jan 23 14:00:12.061: INFO: Created: latency-svc-q44g7
Jan 23 14:00:12.086: INFO: Got endpoints: latency-svc-q44g7 [1.772276879s]
Jan 23 14:00:12.210: INFO: Created: latency-svc-2wbr2
Jan 23 14:00:12.226: INFO: Got endpoints: latency-svc-2wbr2 [1.874210768s]
Jan 23 14:00:12.270: INFO: Created: latency-svc-vmsfd
Jan 23 14:00:12.271: INFO: Got endpoints: latency-svc-vmsfd [1.706764871s]
Jan 23 14:00:12.401: INFO: Created: latency-svc-dmcqv
Jan 23 14:00:12.404: INFO: Got endpoints: latency-svc-dmcqv [1.795222271s]
Jan 23 14:00:12.478: INFO: Created: latency-svc-cxmf6
Jan 23 14:00:12.484: INFO: Got endpoints: latency-svc-cxmf6 [1.676856036s]
Jan 23 14:00:12.585: INFO: Created: latency-svc-b4szx
Jan 23 14:00:12.593: INFO: Got endpoints: latency-svc-b4szx [1.768579066s]
Jan 23 14:00:12.635: INFO: Created: latency-svc-k8qfb
Jan 23 14:00:12.655: INFO: Got endpoints: latency-svc-k8qfb [1.493115183s]
Jan 23 14:00:12.682: INFO: Created: latency-svc-wvkjr
Jan 23 14:00:12.771: INFO: Got endpoints: latency-svc-wvkjr [1.591661961s]
Jan 23 14:00:12.784: INFO: Created: latency-svc-pl7w8
Jan 23 14:00:12.795: INFO: Got endpoints: latency-svc-pl7w8 [1.417436333s]
Jan 23 14:00:12.827: INFO: Created: latency-svc-fgspx
Jan 23 14:00:12.836: INFO: Got endpoints: latency-svc-fgspx [1.302756427s]
Jan 23 14:00:12.874: INFO: Created: latency-svc-gz77t
Jan 23 14:00:12.997: INFO: Got endpoints: latency-svc-gz77t [1.397803882s]
Jan 23 14:00:13.007: INFO: Created: latency-svc-8mqr6
Jan 23 14:00:13.019: INFO: Got endpoints: latency-svc-8mqr6 [1.297838747s]
Jan 23 14:00:13.058: INFO: Created: latency-svc-v2jdv
Jan 23 14:00:13.193: INFO: Got endpoints: latency-svc-v2jdv [1.446456339s]
Jan 23 14:00:13.199: INFO: Created: latency-svc-rcpjs
Jan 23 14:00:13.222: INFO: Got endpoints: latency-svc-rcpjs [1.422105473s]
Jan 23 14:00:13.248: INFO: Created: latency-svc-tbxtt
Jan 23 14:00:13.257: INFO: Got endpoints: latency-svc-tbxtt [1.244657317s]
Jan 23 14:00:13.364: INFO: Created: latency-svc-wjxg6
Jan 23 14:00:13.375: INFO: Got endpoints: latency-svc-wjxg6 [1.288057274s]
Jan 23 14:00:13.434: INFO: Created: latency-svc-g76kx
Jan 23 14:00:13.441: INFO: Got endpoints: latency-svc-g76kx [1.214979502s]
Jan 23 14:00:13.679: INFO: Created: latency-svc-drgs6
Jan 23 14:00:13.706: INFO: Got endpoints: latency-svc-drgs6 [1.435249514s]
Jan 23 14:00:13.871: INFO: Created: latency-svc-jc6k8
Jan 23 14:00:13.877: INFO: Got endpoints: latency-svc-jc6k8 [1.472370102s]
Jan 23 14:00:14.077: INFO: Created: latency-svc-w5rnq
Jan 23 14:00:14.094: INFO: Got endpoints: latency-svc-w5rnq [1.609279783s]
Jan 23 14:00:14.304: INFO: Created: latency-svc-9q8mj
Jan 23 14:00:14.318: INFO: Got endpoints: latency-svc-9q8mj [1.724952981s]
Jan 23 14:00:14.387: INFO: Created: latency-svc-zc7th
Jan 23 14:00:14.533: INFO: Got endpoints: latency-svc-zc7th [1.877982073s]
Jan 23 14:00:14.543: INFO: Created: latency-svc-qwk59
Jan 23 14:00:14.558: INFO: Got endpoints: latency-svc-qwk59 [1.787207964s]
Jan 23 14:00:14.735: INFO: Created: latency-svc-rw4st
Jan 23 14:00:14.755: INFO: Got endpoints: latency-svc-rw4st [1.96008429s]
Jan 23 14:00:14.817: INFO: Created: latency-svc-9l5sr
Jan 23 14:00:14.943: INFO: Got endpoints: latency-svc-9l5sr [2.107115748s]
Jan 23 14:00:14.989: INFO: Created: latency-svc-htbkc
Jan 23 14:00:15.122: INFO: Got endpoints: latency-svc-htbkc [2.125661621s]
Jan 23 14:00:15.124: INFO: Created: latency-svc-ssvrh
Jan 23 14:00:15.132: INFO: Got endpoints: latency-svc-ssvrh [2.112464791s]
Jan 23 14:00:15.198: INFO: Created: latency-svc-2kgh4
Jan 23 14:00:15.214: INFO: Got endpoints: latency-svc-2kgh4 [2.020219079s]
Jan 23 14:00:15.294: INFO: Created: latency-svc-sddxz
Jan 23 14:00:15.307: INFO: Got endpoints: latency-svc-sddxz [2.084032822s]
Jan 23 14:00:15.479: INFO: Created: latency-svc-x4kwb
Jan 23 14:00:15.511: INFO: Got endpoints: latency-svc-x4kwb [2.253831555s]
Jan 23 14:00:15.539: INFO: Created: latency-svc-f4whr
Jan 23 14:00:15.567: INFO: Got endpoints: latency-svc-f4whr [2.192641169s]
Jan 23 14:00:15.679: INFO: Created: latency-svc-xxxhx
Jan 23 14:00:15.691: INFO: Got endpoints: latency-svc-xxxhx [2.249414677s]
Jan 23 14:00:15.723: INFO: Created: latency-svc-hk2nc
Jan 23 14:00:15.728: INFO: Got endpoints: latency-svc-hk2nc [2.021475284s]
Jan 23 14:00:15.762: INFO: Created: latency-svc-6wmr5
Jan 23 14:00:15.767: INFO: Got endpoints: latency-svc-6wmr5 [1.890034092s]
Jan 23 14:00:15.871: INFO: Created: latency-svc-zkqwv
Jan 23 14:00:15.875: INFO: Got endpoints: latency-svc-zkqwv [1.780775288s]
Jan 23 14:00:15.939: INFO: Created: latency-svc-v5h44
Jan 23 14:00:16.021: INFO: Got endpoints: latency-svc-v5h44 [1.702545482s]
Jan 23 14:00:16.032: INFO: Created: latency-svc-fsfbk
Jan 23 14:00:16.036: INFO: Got endpoints: latency-svc-fsfbk [1.503040255s]
Jan 23 14:00:16.079: INFO: Created: latency-svc-7mjc6
Jan 23 14:00:16.094: INFO: Got endpoints: latency-svc-7mjc6 [1.535464999s]
Jan 23 14:00:16.199: INFO: Created: latency-svc-2xdbg
Jan 23 14:00:16.208: INFO: Got endpoints: latency-svc-2xdbg [1.45208983s]
Jan 23 14:00:16.255: INFO: Created: latency-svc-m7wtq
Jan 23 14:00:16.267: INFO: Got endpoints: latency-svc-m7wtq [1.323188045s]
Jan 23 14:00:16.365: INFO: Created: latency-svc-6l8nw
Jan 23 14:00:16.381: INFO: Got endpoints: latency-svc-6l8nw [1.258265052s]
Jan 23 14:00:16.415: INFO: Created: latency-svc-z9zlq
Jan 23 14:00:16.456: INFO: Got endpoints: latency-svc-z9zlq [1.324613457s]
Jan 23 14:00:16.468: INFO: Created: latency-svc-9v5lb
Jan 23 14:00:16.559: INFO: Got endpoints: latency-svc-9v5lb [1.34540043s]
Jan 23 14:00:16.571: INFO: Created: latency-svc-6mvll
Jan 23 14:00:16.601: INFO: Got endpoints: latency-svc-6mvll [1.294030982s]
Jan 23 14:00:16.656: INFO: Created: latency-svc-vkn9v
Jan 23 14:00:16.657: INFO: Got endpoints: latency-svc-vkn9v [1.145695884s]
Jan 23 14:00:16.737: INFO: Created: latency-svc-xb7sz
Jan 23 14:00:16.745: INFO: Got endpoints: latency-svc-xb7sz [1.177654351s]
Jan 23 14:00:16.789: INFO: Created: latency-svc-p4crp
Jan 23 14:00:16.806: INFO: Got endpoints: latency-svc-p4crp [1.115315354s]
Jan 23 14:00:16.983: INFO: Created: latency-svc-nxnts
Jan 23 14:00:16.989: INFO: Got endpoints: latency-svc-nxnts [1.260972499s]
Jan 23 14:00:17.028: INFO: Created: latency-svc-tx6bc
Jan 23 14:00:17.030: INFO: Got endpoints: latency-svc-tx6bc [1.263037995s]
Jan 23 14:00:17.066: INFO: Created: latency-svc-hcdgb
Jan 23 14:00:17.177: INFO: Got endpoints: latency-svc-hcdgb [1.301711843s]
Jan 23 14:00:17.203: INFO: Created: latency-svc-mfbg2
Jan 23 14:00:17.205: INFO: Got endpoints: latency-svc-mfbg2 [1.183843555s]
Jan 23 14:00:17.242: INFO: Created: latency-svc-7xlfh
Jan 23 14:00:17.251: INFO: Got endpoints: latency-svc-7xlfh [1.214427498s]
Jan 23 14:00:17.354: INFO: Created: latency-svc-n4ptz
Jan 23 14:00:17.383: INFO: Got endpoints: latency-svc-n4ptz [1.288215496s]
Jan 23 14:00:17.415: INFO: Created: latency-svc-wdh6s
Jan 23 14:00:17.434: INFO: Got endpoints: latency-svc-wdh6s [1.225395738s]
Jan 23 14:00:17.521: INFO: Created: latency-svc-nv74j
Jan 23 14:00:17.555: INFO: Got endpoints: latency-svc-nv74j [1.288031269s]
Jan 23 14:00:17.600: INFO: Created: latency-svc-pf2gr
Jan 23 14:00:17.612: INFO: Got endpoints: latency-svc-pf2gr [1.230040132s]
Jan 23 14:00:17.713: INFO: Created: latency-svc-7vwdc
Jan 23 14:00:17.717: INFO: Got endpoints: latency-svc-7vwdc [1.260481607s]
Jan 23 14:00:17.768: INFO: Created: latency-svc-42qgv
Jan 23 14:00:17.768: INFO: Got endpoints: latency-svc-42qgv [1.208860785s]
Jan 23 14:00:17.870: INFO: Created: latency-svc-ksgbr
Jan 23 14:00:17.900: INFO: Got endpoints: latency-svc-ksgbr [1.299120047s]
Jan 23 14:00:17.928: INFO: Created: latency-svc-pbcpq
Jan 23 14:00:17.944: INFO: Got endpoints: latency-svc-pbcpq [1.286942445s]
Jan 23 14:00:17.971: INFO: Created: latency-svc-rjvmw
Jan 23 14:00:18.030: INFO: Got endpoints: latency-svc-rjvmw [1.283969872s]
Jan 23 14:00:18.074: INFO: Created: latency-svc-rfgpg
Jan 23 14:00:18.078: INFO: Got endpoints: latency-svc-rfgpg [1.271852031s]
Jan 23 14:00:18.138: INFO: Created: latency-svc-ct76d
Jan 23 14:00:18.221: INFO: Created: latency-svc-vf8lv
Jan 23 14:00:18.230: INFO: Got endpoints: latency-svc-ct76d [1.24062211s]
Jan 23 14:00:18.236: INFO: Got endpoints: latency-svc-vf8lv [1.205480952s]
Jan 23 14:00:18.281: INFO: Created: latency-svc-2jsq8
Jan 23 14:00:18.282: INFO: Got endpoints: latency-svc-2jsq8 [1.104839326s]
Jan 23 14:00:18.391: INFO: Created: latency-svc-hrklv
Jan 23 14:00:18.396: INFO: Got endpoints: latency-svc-hrklv [1.190299823s]
Jan 23 14:00:18.465: INFO: Created: latency-svc-c55xv
Jan 23 14:00:18.558: INFO: Got endpoints: latency-svc-c55xv [1.307015413s]
Jan 23 14:00:18.567: INFO: Created: latency-svc-zvwvt
Jan 23 14:00:18.597: INFO: Created: latency-svc-x4tnp
Jan 23 14:00:18.597: INFO: Got endpoints: latency-svc-zvwvt [1.213704383s]
Jan 23 14:00:18.611: INFO: Got endpoints: latency-svc-x4tnp [1.176525564s]
Jan 23 14:00:18.663: INFO: Created: latency-svc-ct975
Jan 23 14:00:18.808: INFO: Got endpoints: latency-svc-ct975 [1.252048468s]
Jan 23 14:00:18.839: INFO: Created: latency-svc-vlftd
Jan 23 14:00:18.845: INFO: Got endpoints: latency-svc-vlftd [1.232818978s]
Jan 23 14:00:18.992: INFO: Created: latency-svc-td9m6
Jan 23 14:00:19.040: INFO: Got endpoints: latency-svc-td9m6 [1.322365898s]
Jan 23 14:00:19.055: INFO: Created: latency-svc-gnplk
Jan 23 14:00:19.149: INFO: Got endpoints: latency-svc-gnplk [1.380892359s]
Jan 23 14:00:19.167: INFO: Created: latency-svc-rgwmd
Jan 23 14:00:19.224: INFO: Got endpoints: latency-svc-rgwmd [1.323548934s]
Jan 23 14:00:19.231: INFO: Created: latency-svc-pgctw
Jan 23 14:00:19.248: INFO: Got endpoints: latency-svc-pgctw [1.30435765s]
Jan 23 14:00:19.252: INFO: Created: latency-svc-kwp2z
Jan 23 14:00:19.376: INFO: Got endpoints: latency-svc-kwp2z [1.345973677s]
Jan 23 14:00:19.405: INFO: Created: latency-svc-87dw8
Jan 23 14:00:19.406: INFO: Got endpoints: latency-svc-87dw8 [1.326816256s]
Jan 23 14:00:19.406: INFO: Latencies: [146.741685ms 211.31823ms 261.172366ms 391.599934ms 401.440391ms 554.421923ms 607.538397ms 709.627311ms 752.93541ms 896.061535ms 908.187872ms 972.16693ms 1.056914842s 1.0814798s 1.081634061s 1.104839326s 1.115315354s 1.130085457s 1.145695884s 1.156869332s 1.157976052s 1.176415239s 1.176525564s 1.177654351s 1.183843555s 1.190299823s 1.205480952s 1.208860785s 1.213704383s 1.214427498s 1.214979502s 1.218873247s 1.225395738s 1.230040132s 1.231414397s 1.232818978s 1.24062211s 1.244657317s 1.252048468s 1.258265052s 1.260481607s 1.260972499s 1.263037995s 1.270016177s 1.271852031s 1.273629778s 1.279099033s 1.283969872s 1.285686556s 1.286942445s 1.288031269s 1.288057274s 1.288215496s 1.291978898s 1.292988893s 1.294030982s 1.295335089s 1.297838747s 1.299120047s 1.301711843s 1.302756427s 1.30435765s 1.307015413s 1.322365898s 1.323188045s 1.323548934s 1.324613457s 1.326816256s 1.34540043s 1.345973677s 1.34910147s 1.363653926s 1.366999806s 1.377631293s 1.377707575s 1.380892359s 1.38298409s 1.393228154s 1.397803882s 1.397942455s 1.401631212s 1.406864405s 1.417436333s 1.417618005s 1.418916464s 1.422105473s 1.425261923s 1.428499473s 1.435249514s 1.435840603s 1.440405794s 1.446456339s 1.45208983s 1.452513394s 1.457124012s 1.457493913s 1.472054558s 1.472370102s 1.478852539s 1.48418043s 1.488179272s 1.493115183s 1.499570476s 1.499692077s 1.503040255s 1.505674272s 1.51620908s 1.518360693s 1.531556902s 1.535464999s 1.546512241s 1.553649108s 1.556122775s 1.573962481s 1.583932212s 1.591661961s 1.594122264s 1.59533133s 1.609279783s 1.624453235s 1.627821202s 1.627865701s 1.631388336s 1.634696136s 1.637619886s 1.641204152s 1.649571735s 1.653837918s 1.676856036s 1.684874566s 1.698889467s 1.700315546s 1.701105213s 1.702545482s 1.706764871s 1.707673281s 1.717288522s 1.720998161s 1.724952981s 1.731170616s 1.731613735s 1.733459783s 1.736545394s 1.744785069s 1.756197836s 1.760974341s 1.768579066s 1.772276879s 1.780775288s 1.780919213s 1.787207964s 1.795222271s 1.798947385s 1.79975195s 1.838992691s 1.846257347s 1.850655037s 1.868876834s 1.874210768s 1.877982073s 1.881892475s 1.890034092s 1.910299128s 1.914407279s 1.932895699s 1.96008429s 1.993319723s 2.007083414s 2.020219079s 2.021475284s 2.0264991s 2.084032822s 2.107115748s 2.112464791s 2.125661621s 2.134729541s 2.145129076s 2.146875838s 2.14958091s 2.175860394s 2.184791257s 2.192510307s 2.192641169s 2.210675936s 2.237961732s 2.249414677s 2.249823963s 2.253831555s 2.261973879s 2.298302852s 2.325284936s 2.343225533s 2.344151247s 2.36582814s 2.411155907s 2.412426218s 2.424728178s 2.46601869s 2.616771281s 2.716073668s]
Jan 23 14:00:19.407: INFO: 50 %ile: 1.488179272s
Jan 23 14:00:19.407: INFO: 90 %ile: 2.184791257s
Jan 23 14:00:19.407: INFO: 99 %ile: 2.616771281s
Jan 23 14:00:19.407: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:00:19.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7123" for this suite.
Jan 23 14:00:55.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:00:55.585: INFO: namespace svc-latency-7123 deletion completed in 36.17069756s

• [SLOW TEST:64.801 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
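
For each of the 200 services above, the test records the delay between creating the service and first observing its endpoints, then reports percentiles over the samples. The following sketch shows a nearest-rank percentile step that agrees with the 50/90/99 %ile values reported above when applied to the full sorted list (the sample data below is a stand-in, not the run's 200 measurements):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the q-th percentile (0 < q < 100) of ds using
// nearest-rank indexing: sort, then take the element at len*q/100.
func percentile(ds []time.Duration, q int) time.Duration {
	sorted := append([]time.Duration(nil), ds...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	i := len(sorted) * q / 100
	if i >= len(sorted) {
		i = len(sorted) - 1
	}
	return sorted[i]
}

func main() {
	samples := []time.Duration{ // stand-in data
		146 * time.Millisecond, 1488 * time.Millisecond, 2616 * time.Millisecond,
	}
	for _, q := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", q, percentile(samples, q))
	}
}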
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:00:55.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-23b774ed-d334-4b30-8215-f09248f88d3e
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:00:55.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7300" for this suite.
Jan 23 14:01:01.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:01:01.904: INFO: namespace configmap-7300 deletion completed in 6.200850632s

• [SLOW TEST:6.318 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
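
This test runs no pod at all: it only asserts that the API server's validation rejects a ConfigMap whose Data map contains an empty key. A sketch of the kind of object being submitted (the name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Data keys must be non-empty valid config keys; submitting this object
	// fails at validation time, which is exactly what the test asserts.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"},
	}
	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out))
}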
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:01:01.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7066.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7066.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7066.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7066.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7066.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7066.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7066.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7066.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7066.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7066.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7066.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.94.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.94.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.94.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.94.193_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7066.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7066.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7066.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7066.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7066.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7066.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7066.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7066.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7066.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7066.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7066.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.94.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.94.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.94.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.94.193_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 14:01:14.328: INFO: Unable to read wheezy_udp@dns-test-service.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.358: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.369: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.376: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.382: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.385: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.390: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.399: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.404: INFO: Unable to read 10.107.94.193_udp@PTR from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.408: INFO: Unable to read 10.107.94.193_tcp@PTR from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.413: INFO: Unable to read jessie_udp@dns-test-service.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.419: INFO: Unable to read jessie_tcp@dns-test-service.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.424: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.430: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.436: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.441: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7066.svc.cluster.local from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.453: INFO: Unable to read jessie_udp@PodARecord from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.458: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.463: INFO: Unable to read 10.107.94.193_udp@PTR from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.473: INFO: Unable to read 10.107.94.193_tcp@PTR from pod dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4: the server could not find the requested resource (get pods dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4)
Jan 23 14:01:14.473: INFO: Lookups using dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4 failed for: [wheezy_udp@dns-test-service.dns-7066.svc.cluster.local wheezy_tcp@dns-test-service.dns-7066.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-7066.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-7066.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.107.94.193_udp@PTR 10.107.94.193_tcp@PTR jessie_udp@dns-test-service.dns-7066.svc.cluster.local jessie_tcp@dns-test-service.dns-7066.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7066.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7066.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7066.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.107.94.193_udp@PTR 10.107.94.193_tcp@PTR]

Jan 23 14:01:19.650: INFO: DNS probes using dns-7066/dns-test-fefe86f5-78d1-4a33-96ab-14cab2c13ab4 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:01:19.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7066" for this suite.
Jan 23 14:01:25.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:01:26.122: INFO: namespace dns-7066 deletion completed in 6.148804352s

• [SLOW TEST:24.216 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
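
The dig loops above probe A, SRV, and PTR records for both a regular and a headless service; the early "Unable to read" messages are expected while the prober pod and DNS records converge, after which the probes succeed. A headless service is simply one with ClusterIP set to None, as in this sketch (service name mirrors the log; selector and port are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Headless service: ClusterIP "None" makes <name>.<ns>.svc.cluster.local
	// resolve directly to the backing pod IPs, and the named port is exposed
	// as SRV records under _http._tcp.<name>.<ns>.svc.cluster.local.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"app": "dns-test"},
			Ports: []corev1.ServicePort{{
				Name: "http", Port: 80, Protocol: corev1.ProtocolTCP,
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}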
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:01:26.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 23 14:01:26.246: INFO: Waiting up to 5m0s for pod "pod-66674f4c-76ff-4462-bcd4-4f07ca1f237a" in namespace "emptydir-888" to be "success or failure"
Jan 23 14:01:26.264: INFO: Pod "pod-66674f4c-76ff-4462-bcd4-4f07ca1f237a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.913799ms
Jan 23 14:01:28.272: INFO: Pod "pod-66674f4c-76ff-4462-bcd4-4f07ca1f237a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025436307s
Jan 23 14:01:30.280: INFO: Pod "pod-66674f4c-76ff-4462-bcd4-4f07ca1f237a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034065341s
Jan 23 14:01:32.293: INFO: Pod "pod-66674f4c-76ff-4462-bcd4-4f07ca1f237a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046485194s
Jan 23 14:01:34.304: INFO: Pod "pod-66674f4c-76ff-4462-bcd4-4f07ca1f237a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058087836s
STEP: Saw pod success
Jan 23 14:01:34.305: INFO: Pod "pod-66674f4c-76ff-4462-bcd4-4f07ca1f237a" satisfied condition "success or failure"
Jan 23 14:01:34.317: INFO: Trying to get logs from node iruya-node pod pod-66674f4c-76ff-4462-bcd4-4f07ca1f237a container test-container: 
STEP: delete the pod
Jan 23 14:01:34.418: INFO: Waiting for pod pod-66674f4c-76ff-4462-bcd4-4f07ca1f237a to disappear
Jan 23 14:01:34.430: INFO: Pod pod-66674f4c-76ff-4462-bcd4-4f07ca1f237a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:01:34.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-888" for this suite.
Jan 23 14:01:40.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:01:40.647: INFO: namespace emptydir-888 deletion completed in 6.204617387s

• [SLOW TEST:14.524 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
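
The "(root,0777,default)" case mounts an emptyDir on the default (node-disk) medium and has the test image verify the mount point's mode before exiting. A minimal equivalent, with an illustrative image and a stat command standing in for the test binary's checks:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// emptyDir on the default medium; the mount point is created 0777,
	// which is what the container inspects and reports.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}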
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:01:40.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan 23 14:01:40.802: INFO: Waiting up to 5m0s for pod "var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876" in namespace "var-expansion-9847" to be "success or failure"
Jan 23 14:01:40.809: INFO: Pod "var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876": Phase="Pending", Reason="", readiness=false. Elapsed: 6.70751ms
Jan 23 14:01:43.019: INFO: Pod "var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216575556s
Jan 23 14:01:45.030: INFO: Pod "var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228066093s
Jan 23 14:01:47.044: INFO: Pod "var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24144217s
Jan 23 14:01:49.051: INFO: Pod "var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249117864s
Jan 23 14:01:51.060: INFO: Pod "var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.258283451s
STEP: Saw pod success
Jan 23 14:01:51.060: INFO: Pod "var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876" satisfied condition "success or failure"
Jan 23 14:01:51.066: INFO: Trying to get logs from node iruya-node pod var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876 container dapi-container: 
STEP: delete the pod
Jan 23 14:01:51.208: INFO: Waiting for pod var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876 to disappear
Jan 23 14:01:51.215: INFO: Pod var-expansion-e332ea9e-114a-4613-a32e-1bf93f767876 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:01:51.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9847" for this suite.
Jan 23 14:01:57.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:01:57.365: INFO: namespace var-expansion-9847 deletion completed in 6.14304891s

• [SLOW TEST:16.717 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
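The "env composition" pod relies on the kubelet expanding $(NAME) references between environment variables before the container starts. A minimal sketch, with all names and values as illustrative assumptions:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // COMPOSED references the two variables declared before it; the
    // kubelet substitutes their values when launching the container.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{
                    {Name: "FOO", Value: "foo-value"},
                    {Name: "BAR", Value: "bar-value"},
                    {Name: "COMPOSED", Value: "$(FOO);;$(BAR)"},
                },
            }},
        },
    }
    fmt.Printf("spec value before kubelet expansion: %s\n",
        pod.Spec.Containers[0].Env[2].Value)
}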
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:01:57.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 14:01:57.468: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e72260ea-27d5-4bdd-830a-e61ef3b140c5" in namespace "downward-api-7537" to be "success or failure"
Jan 23 14:01:57.495: INFO: Pod "downwardapi-volume-e72260ea-27d5-4bdd-830a-e61ef3b140c5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.991026ms
Jan 23 14:01:59.509: INFO: Pod "downwardapi-volume-e72260ea-27d5-4bdd-830a-e61ef3b140c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041007632s
Jan 23 14:02:01.525: INFO: Pod "downwardapi-volume-e72260ea-27d5-4bdd-830a-e61ef3b140c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057222461s
Jan 23 14:02:03.535: INFO: Pod "downwardapi-volume-e72260ea-27d5-4bdd-830a-e61ef3b140c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067193316s
Jan 23 14:02:05.553: INFO: Pod "downwardapi-volume-e72260ea-27d5-4bdd-830a-e61ef3b140c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08492876s
STEP: Saw pod success
Jan 23 14:02:05.554: INFO: Pod "downwardapi-volume-e72260ea-27d5-4bdd-830a-e61ef3b140c5" satisfied condition "success or failure"
Jan 23 14:02:05.560: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e72260ea-27d5-4bdd-830a-e61ef3b140c5 container client-container: 
STEP: delete the pod
Jan 23 14:02:05.622: INFO: Waiting for pod downwardapi-volume-e72260ea-27d5-4bdd-830a-e61ef3b140c5 to disappear
Jan 23 14:02:05.627: INFO: Pod downwardapi-volume-e72260ea-27d5-4bdd-830a-e61ef3b140c5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:02:05.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7537" for this suite.
Jan 23 14:02:11.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:02:11.906: INFO: namespace downward-api-7537 deletion completed in 6.272938273s

• [SLOW TEST:14.541 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
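The "downward API volume plugin" exercised here projects a container's own resource fields into files inside the pod. A minimal sketch of a pod projecting requests.cpu, with illustrative names and values:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The container declares a CPU request; the downwardAPI volume item
    // points back at that request via a resourceFieldRef, so the container
    // can read its own request from a file.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-request-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU: resource.MustParse("250m"),
                    },
                },
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_request",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.cpu",
                            },
                        }},
                    },
                },
            }},
        },
    }
    fmt.Printf("would create pod %q\n", pod.Name)
}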
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:02:11.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4498.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4498.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 14:02:24.148: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4498/dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8: the server could not find the requested resource (get pods dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8)
Jan 23 14:02:24.155: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4498/dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8: the server could not find the requested resource (get pods dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8)
Jan 23 14:02:24.159: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4498/dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8: the server could not find the requested resource (get pods dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8)
Jan 23 14:02:24.169: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4498/dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8: the server could not find the requested resource (get pods dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8)
Jan 23 14:02:24.172: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4498/dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8: the server could not find the requested resource (get pods dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8)
Jan 23 14:02:24.176: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4498/dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8: the server could not find the requested resource (get pods dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8)
Jan 23 14:02:24.181: INFO: Unable to read jessie_udp@PodARecord from pod dns-4498/dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8: the server could not find the requested resource (get pods dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8)
Jan 23 14:02:24.186: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4498/dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8: the server could not find the requested resource (get pods dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8)
Jan 23 14:02:24.186: INFO: Lookups using dns-4498/dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 23 14:02:29.237: INFO: DNS probes using dns-4498/dns-test-af32feed-0233-42be-b1b2-a4dd71bc40d8 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:02:29.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4498" for this suite.
Jan 23 14:02:35.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:02:35.562: INFO: namespace dns-4498 deletion completed in 6.231746437s

• [SLOW TEST:23.655 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
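The dig loops above run inside prober pods; the test then collects one result file per expected name (wheezy_udp@..., wheezy_tcp@..., the PodARecord pair), which is why the early "Unable to read" lines resolve once the probes have written their files. A stripped-down sketch of one such prober, with the image and the single probe command as illustrative assumptions:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // One UDP lookup of the apiserver's cluster-DNS name, in the spirit
    // of the logged loop (which also does +tcp and pod A-record variants).
    cmd := `check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$check" && echo OK`
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-probe-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "prober",
                Image:   "busybox", // illustrative; any image that ships dig works
                Command: []string{"sh", "-c", cmd},
            }},
        },
    }
    fmt.Printf("would create pod %q running: %s\n", pod.Name, cmd)
}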
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:02:35.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 23 14:02:35.835: INFO: Number of nodes with available pods: 0
Jan 23 14:02:35.835: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:02:37.888: INFO: Number of nodes with available pods: 0
Jan 23 14:02:37.889: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:02:38.867: INFO: Number of nodes with available pods: 0
Jan 23 14:02:38.867: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:02:39.863: INFO: Number of nodes with available pods: 0
Jan 23 14:02:39.863: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:02:41.404: INFO: Number of nodes with available pods: 0
Jan 23 14:02:41.404: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:02:42.545: INFO: Number of nodes with available pods: 0
Jan 23 14:02:42.545: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:02:42.845: INFO: Number of nodes with available pods: 0
Jan 23 14:02:42.845: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:02:43.856: INFO: Number of nodes with available pods: 0
Jan 23 14:02:43.856: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:02:44.852: INFO: Number of nodes with available pods: 1
Jan 23 14:02:44.852: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:02:45.854: INFO: Number of nodes with available pods: 2
Jan 23 14:02:45.854: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 23 14:02:45.924: INFO: Number of nodes with available pods: 1
Jan 23 14:02:45.924: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:46.960: INFO: Number of nodes with available pods: 1
Jan 23 14:02:46.961: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:47.954: INFO: Number of nodes with available pods: 1
Jan 23 14:02:47.954: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:49.382: INFO: Number of nodes with available pods: 1
Jan 23 14:02:49.382: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:49.942: INFO: Number of nodes with available pods: 1
Jan 23 14:02:49.942: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:50.950: INFO: Number of nodes with available pods: 1
Jan 23 14:02:50.950: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:51.938: INFO: Number of nodes with available pods: 1
Jan 23 14:02:51.938: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:52.974: INFO: Number of nodes with available pods: 1
Jan 23 14:02:52.974: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:53.949: INFO: Number of nodes with available pods: 1
Jan 23 14:02:53.949: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:54.943: INFO: Number of nodes with available pods: 1
Jan 23 14:02:54.944: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:55.940: INFO: Number of nodes with available pods: 1
Jan 23 14:02:55.940: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:56.940: INFO: Number of nodes with available pods: 1
Jan 23 14:02:56.940: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:58.042: INFO: Number of nodes with available pods: 1
Jan 23 14:02:58.042: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:59.346: INFO: Number of nodes with available pods: 1
Jan 23 14:02:59.346: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:02:59.941: INFO: Number of nodes with available pods: 1
Jan 23 14:02:59.941: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:03:00.956: INFO: Number of nodes with available pods: 1
Jan 23 14:03:00.956: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:03:01.949: INFO: Number of nodes with available pods: 1
Jan 23 14:03:01.949: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:03:03.216: INFO: Number of nodes with available pods: 1
Jan 23 14:03:03.216: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:03:03.998: INFO: Number of nodes with available pods: 1
Jan 23 14:03:03.998: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:03:04.941: INFO: Number of nodes with available pods: 2
Jan 23 14:03:04.941: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4086, will wait for the garbage collector to delete the pods
Jan 23 14:03:05.006: INFO: Deleting DaemonSet.extensions daemon-set took: 9.285501ms
Jan 23 14:03:05.307: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.95562ms
Jan 23 14:03:17.964: INFO: Number of nodes with available pods: 0
Jan 23 14:03:17.964: INFO: Number of running nodes: 0, number of available pods: 0
Jan 23 14:03:17.968: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4086/daemonsets","resourceVersion":"21566003"},"items":null}

Jan 23 14:03:17.971: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4086/pods","resourceVersion":"21566003"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:03:17.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4086" for this suite.
Jan 23 14:03:24.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:03:24.115: INFO: namespace daemonsets-4086 deletion completed in 6.126974195s

• [SLOW TEST:48.551 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
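A "simple daemon" in this test is a DaemonSet whose template runs one pod per schedulable node; the controller revives deleted pods, which is what the "check that the daemon pod is revived" polling above waits for. A minimal sketch, with labels and image as illustrative assumptions:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Selector and template labels must match; the controller then keeps
    // exactly one matching pod on every eligible node.
    labels := map[string]string{"daemonset-name": "daemon-set"}
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "k8s.gcr.io/pause:3.1", // illustrative; any minimal image works
                    }},
                },
            },
        },
    }
    fmt.Printf("would create daemonset %q\n", ds.Name)
}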
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:03:24.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8e98f6a1-49fc-42f7-a1c6-d9dda853415a
STEP: Creating a pod to test consume secrets
Jan 23 14:03:24.233: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2789289-238d-4e2b-a1c3-139095753a0b" in namespace "projected-9710" to be "success or failure"
Jan 23 14:03:24.260: INFO: Pod "pod-projected-secrets-e2789289-238d-4e2b-a1c3-139095753a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.769513ms
Jan 23 14:03:26.267: INFO: Pod "pod-projected-secrets-e2789289-238d-4e2b-a1c3-139095753a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034066755s
Jan 23 14:03:28.281: INFO: Pod "pod-projected-secrets-e2789289-238d-4e2b-a1c3-139095753a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047748922s
Jan 23 14:03:30.429: INFO: Pod "pod-projected-secrets-e2789289-238d-4e2b-a1c3-139095753a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195289455s
Jan 23 14:03:32.443: INFO: Pod "pod-projected-secrets-e2789289-238d-4e2b-a1c3-139095753a0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.209311877s
STEP: Saw pod success
Jan 23 14:03:32.443: INFO: Pod "pod-projected-secrets-e2789289-238d-4e2b-a1c3-139095753a0b" satisfied condition "success or failure"
Jan 23 14:03:32.448: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e2789289-238d-4e2b-a1c3-139095753a0b container projected-secret-volume-test: 
STEP: delete the pod
Jan 23 14:03:32.595: INFO: Waiting for pod pod-projected-secrets-e2789289-238d-4e2b-a1c3-139095753a0b to disappear
Jan 23 14:03:32.610: INFO: Pod pod-projected-secrets-e2789289-238d-4e2b-a1c3-139095753a0b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:03:32.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9710" for this suite.
Jan 23 14:03:38.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:03:38.747: INFO: namespace projected-9710 deletion completed in 6.123076326s

• [SLOW TEST:14.632 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
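Consuming a projected secret as non-root combines three knobs: runAsUser, fsGroup, and the projected volume's defaultMode. A minimal sketch with hypothetical UID/GID, mode, and object names:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // defaultMode fixes the projected files' permission bits; fsGroup
    // makes them group-readable for the non-root user the pod runs as.
    uid, fsGroup := int64(1000), int64(1001)
    mode := int32(0440)
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &uid,
                FSGroup:   &fsGroup,
            },
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls -l /projected && cat /projected/*"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret",
                    MountPath: "/projected",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-secret",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode,
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-secret-test", // hypothetical secret name
                                },
                            },
                        }},
                    },
                },
            }},
        },
    }
    fmt.Printf("would create pod %q\n", pod.Name)
}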
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:03:38.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-3d5d6658-be71-4e8f-b1be-69eb60ba628d
STEP: Creating configMap with name cm-test-opt-upd-36d71a95-dc55-4bd4-b93b-61e0230279c9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3d5d6658-be71-4e8f-b1be-69eb60ba628d
STEP: Updating configmap cm-test-opt-upd-36d71a95-dc55-4bd4-b93b-61e0230279c9
STEP: Creating configMap with name cm-test-opt-create-c3293427-4d59-42c7-8b8a-5688f289763c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:05:02.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6343" for this suite.
Jan 23 14:05:26.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:05:26.855: INFO: namespace configmap-6343 deletion completed in 24.162059552s

• [SLOW TEST:108.108 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
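The optional-updates test hinges on Optional configMap volumes: the pod runs even while a referenced configMap is missing, and the kubelet re-syncs the volume contents when configMaps are deleted, updated, or created, which is the "waiting to observe update in volume" step above. A minimal sketch of one such mount, names illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Optional=true lets the pod start before the configMap exists; the
    // container loops and prints whatever the kubelet has projected so far.
    optional := true
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "cm-optional-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "watcher",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "cm-volume",
                    MountPath: "/etc/cm",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "cm-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "cm-test-opt-create", // may not exist yet
                        },
                        Optional: &optional,
                    },
                },
            }},
        },
    }
    fmt.Printf("would create pod %q\n", pod.Name)
}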
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:05:26.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6745.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6745.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 14:05:37.058: INFO: File wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-669f9594-7e87-4555-911c-1ceb991ce9bd contains '' instead of 'foo.example.com.'
Jan 23 14:05:37.066: INFO: File jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-669f9594-7e87-4555-911c-1ceb991ce9bd contains '' instead of 'foo.example.com.'
Jan 23 14:05:37.066: INFO: Lookups using dns-6745/dns-test-669f9594-7e87-4555-911c-1ceb991ce9bd failed for: [wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local]

Jan 23 14:05:42.083: INFO: DNS probes using dns-test-669f9594-7e87-4555-911c-1ceb991ce9bd succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6745.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6745.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 14:05:58.273: INFO: File wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-00d264d9-337a-4faf-a833-5f5151ba78d1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 14:05:58.280: INFO: File jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-00d264d9-337a-4faf-a833-5f5151ba78d1 contains '' instead of 'bar.example.com.'
Jan 23 14:05:58.280: INFO: Lookups using dns-6745/dns-test-00d264d9-337a-4faf-a833-5f5151ba78d1 failed for: [wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local]

Jan 23 14:06:03.293: INFO: File wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-00d264d9-337a-4faf-a833-5f5151ba78d1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 14:06:03.299: INFO: File jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-00d264d9-337a-4faf-a833-5f5151ba78d1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 14:06:03.299: INFO: Lookups using dns-6745/dns-test-00d264d9-337a-4faf-a833-5f5151ba78d1 failed for: [wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local]

Jan 23 14:06:08.302: INFO: File wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-00d264d9-337a-4faf-a833-5f5151ba78d1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 14:06:08.310: INFO: File jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-00d264d9-337a-4faf-a833-5f5151ba78d1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 23 14:06:08.310: INFO: Lookups using dns-6745/dns-test-00d264d9-337a-4faf-a833-5f5151ba78d1 failed for: [wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local]

Jan 23 14:06:13.302: INFO: DNS probes using dns-test-00d264d9-337a-4faf-a833-5f5151ba78d1 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6745.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6745.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 23 14:06:27.582: INFO: File wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-820eac61-40c8-40c8-b586-4af4af081d4a contains '' instead of '10.102.126.141'
Jan 23 14:06:27.594: INFO: File jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-820eac61-40c8-40c8-b586-4af4af081d4a contains '' instead of '10.102.126.141'
Jan 23 14:06:27.594: INFO: Lookups using dns-6745/dns-test-820eac61-40c8-40c8-b586-4af4af081d4a failed for: [wheezy_udp@dns-test-service-3.dns-6745.svc.cluster.local jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local]

Jan 23 14:06:32.629: INFO: File jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local from pod dns-6745/dns-test-820eac61-40c8-40c8-b586-4af4af081d4a contains '' instead of '10.102.126.141'
Jan 23 14:06:32.629: INFO: Lookups using dns-6745/dns-test-820eac61-40c8-40c8-b586-4af4af081d4a failed for: [jessie_udp@dns-test-service-3.dns-6745.svc.cluster.local]

Jan 23 14:06:37.617: INFO: DNS probes using dns-test-820eac61-40c8-40c8-b586-4af4af081d4a succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:06:37.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6745" for this suite.
Jan 23 14:06:43.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:06:44.057: INFO: namespace dns-6745 deletion completed in 6.148716955s

• [SLOW TEST:77.201 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
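An ExternalName service resolves as a CNAME to spec.externalName, which is why the probers above first expect 'foo.example.com.', then 'bar.example.com.' after the edit, and finally an A record (10.102.126.141) once the service becomes ClusterIP. A minimal sketch of the initial service:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The cluster DNS answers <name>.<ns>.svc.cluster.local with a CNAME
    // pointing at ExternalName; no endpoints or cluster IP are involved
    // until the service type is changed to ClusterIP.
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
        Spec: corev1.ServiceSpec{
            Type:         corev1.ServiceTypeExternalName,
            ExternalName: "foo.example.com",
        },
    }
    fmt.Printf("%s.<ns>.svc.cluster.local -> CNAME %s\n", svc.Name, svc.Spec.ExternalName)
}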
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:06:44.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 23 14:06:44.148: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 23 14:06:44.161: INFO: Waiting for terminating namespaces to be deleted...
Jan 23 14:06:44.189: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 23 14:06:44.206: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 23 14:06:44.206: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 23 14:06:44.206: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 23 14:06:44.206: INFO: 	Container weave ready: true, restart count 0
Jan 23 14:06:44.206: INFO: 	Container weave-npc ready: true, restart count 0
Jan 23 14:06:44.206: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 23 14:06:44.217: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 23 14:06:44.218: INFO: 	Container etcd ready: true, restart count 0
Jan 23 14:06:44.218: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 23 14:06:44.218: INFO: 	Container weave ready: true, restart count 0
Jan 23 14:06:44.218: INFO: 	Container weave-npc ready: true, restart count 0
Jan 23 14:06:44.218: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 23 14:06:44.218: INFO: 	Container coredns ready: true, restart count 0
Jan 23 14:06:44.218: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 23 14:06:44.218: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 23 14:06:44.218: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 23 14:06:44.218: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 23 14:06:44.218: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 23 14:06:44.218: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 23 14:06:44.218: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 23 14:06:44.218: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 23 14:06:44.218: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 23 14:06:44.218: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan 23 14:06:44.293: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 23 14:06:44.293: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 23 14:06:44.293: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 23 14:06:44.293: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan 23 14:06:44.293: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan 23 14:06:44.293: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 23 14:06:44.293: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan 23 14:06:44.293: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 23 14:06:44.293: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan 23 14:06:44.293: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-27b17d75-6843-4aa7-af2f-2f6ac49ecdaa.15ec892f0a85632f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5128/filler-pod-27b17d75-6843-4aa7-af2f-2f6ac49ecdaa to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-27b17d75-6843-4aa7-af2f-2f6ac49ecdaa.15ec89303008ca72], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-27b17d75-6843-4aa7-af2f-2f6ac49ecdaa.15ec8930f0da9a82], Reason = [Created], Message = [Created container filler-pod-27b17d75-6843-4aa7-af2f-2f6ac49ecdaa]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-27b17d75-6843-4aa7-af2f-2f6ac49ecdaa.15ec893119ab8803], Reason = [Started], Message = [Started container filler-pod-27b17d75-6843-4aa7-af2f-2f6ac49ecdaa]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-63b0c0a2-f194-4254-9a2a-ef6cd92162b5.15ec892f032997db], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5128/filler-pod-63b0c0a2-f194-4254-9a2a-ef6cd92162b5 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-63b0c0a2-f194-4254-9a2a-ef6cd92162b5.15ec893014ffae1a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-63b0c0a2-f194-4254-9a2a-ef6cd92162b5.15ec8930f595198f], Reason = [Created], Message = [Created container filler-pod-63b0c0a2-f194-4254-9a2a-ef6cd92162b5]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-63b0c0a2-f194-4254-9a2a-ef6cd92162b5.15ec89311cbc11f0], Reason = [Started], Message = [Started container filler-pod-63b0c0a2-f194-4254-9a2a-ef6cd92162b5]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ec8931610f3d04], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:06:55.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5128" for this suite.
Jan 23 14:07:03.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:07:03.799: INFO: namespace sched-pred-5128 deletion completed in 8.198817617s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:19.742 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
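The predicate validated here sums CPU requests per node: filler pods absorb most of each node's allocatable CPU, and one more pod that cannot fit anywhere stays Pending with the FailedScheduling event logged above ("0/2 nodes are available: 2 Insufficient cpu."). A minimal sketch of such an unschedulable pod; the 600m request is an illustrative stand-in for "more than what remains":

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Scheduling is decided on requests, not actual usage: if no node's
    // remaining allocatable CPU covers this request, the pod stays Pending.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1", // the image pulled in the events above
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU: resource.MustParse("600m"),
                    },
                },
            }},
        },
    }
    fmt.Printf("pod %q requests %s CPU\n", pod.Name,
        pod.Spec.Containers[0].Resources.Requests.Cpu())
}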
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:07:03.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-0c57b996-403a-4831-a821-0f458cf00ae0 in namespace container-probe-450
Jan 23 14:07:12.655: INFO: Started pod liveness-0c57b996-403a-4831-a821-0f458cf00ae0 in namespace container-probe-450
STEP: checking the pod's current state and verifying that restartCount is present
Jan 23 14:07:12.659: INFO: Initial restart count of pod liveness-0c57b996-403a-4831-a821-0f458cf00ae0 is 0
Jan 23 14:07:32.872: INFO: Restart count of pod container-probe-450/liveness-0c57b996-403a-4831-a821-0f458cf00ae0 is now 1 (20.212142978s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:07:32.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-450" for this suite.
Jan 23 14:07:38.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:07:39.207: INFO: namespace container-probe-450 deletion completed in 6.279907275s

• [SLOW TEST:35.406 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
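The restart counted above is driven by an HTTP liveness probe: once /healthz starts failing, the kubelet kills and restarts the container, incrementing restartCount, which is exactly the 0-to-1 transition the test waits for. A minimal sketch with illustrative image, port, and thresholds; note the v1.15-era API embeds the probe handler as the Handler field, which later releases renamed to ProbeHandler:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // A server assumed to fail /healthz after a warm-up period; the probe
    // then fails and the kubelet restarts the container.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "k8s.gcr.io/liveness", // illustrative test server
                Args:  []string{"/server"},
                LivenessProbe: &corev1.Probe{
                    Handler: corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/healthz",
                            Port: intstr.FromInt(8080),
                        },
                    },
                    InitialDelaySeconds: 15,
                    PeriodSeconds:       5,
                    FailureThreshold:    1,
                },
            }},
        },
    }
    fmt.Printf("would create pod %q\n", pod.Name)
}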
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:07:39.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-7gwc
STEP: Creating a pod to test atomic-volume-subpath
Jan 23 14:07:39.368: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7gwc" in namespace "subpath-9447" to be "success or failure"
Jan 23 14:07:39.386: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.530834ms
Jan 23 14:07:41.401: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032453066s
Jan 23 14:07:43.408: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040353993s
Jan 23 14:07:45.510: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141789214s
Jan 23 14:07:47.518: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 8.149872766s
Jan 23 14:07:49.532: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 10.163594879s
Jan 23 14:07:51.541: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 12.173148208s
Jan 23 14:07:53.551: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 14.183051598s
Jan 23 14:07:55.563: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 16.194472943s
Jan 23 14:07:57.572: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 18.203449167s
Jan 23 14:07:59.594: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 20.225874773s
Jan 23 14:08:01.614: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 22.24573787s
Jan 23 14:08:03.628: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 24.260065975s
Jan 23 14:08:05.643: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 26.274664447s
Jan 23 14:08:07.650: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Running", Reason="", readiness=true. Elapsed: 28.281696502s
Jan 23 14:08:09.703: INFO: Pod "pod-subpath-test-configmap-7gwc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.334591147s
STEP: Saw pod success
Jan 23 14:08:09.703: INFO: Pod "pod-subpath-test-configmap-7gwc" satisfied condition "success or failure"
Jan 23 14:08:09.710: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-7gwc container test-container-subpath-configmap-7gwc: 
STEP: delete the pod
Jan 23 14:08:09.761: INFO: Waiting for pod pod-subpath-test-configmap-7gwc to disappear
Jan 23 14:08:09.774: INFO: Pod pod-subpath-test-configmap-7gwc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7gwc
Jan 23 14:08:09.774: INFO: Deleting pod "pod-subpath-test-configmap-7gwc" in namespace "subpath-9447"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:08:09.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9447" for this suite.
Jan 23 14:08:15.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:08:16.006: INFO: namespace subpath-9447 deletion completed in 6.216330511s

• [SLOW TEST:36.798 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
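subPath mounts a single item of a volume over one path, here an existing file, instead of shadowing a whole directory; the long Running phase above is the container repeatedly verifying the projected content. A minimal sketch with all paths and names as illustrative assumptions:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The subPath picks one key out of the configMap volume and bind-mounts
    // it over a file path that already exists in the image.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-configmap-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/hosts"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/etc/hosts",     // an existing file in the image
                    SubPath:   "hosts-override", // a single key from the volume
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "subpath-configmap", // hypothetical configMap name
                        },
                    },
                },
            }},
        },
    }
    fmt.Printf("would create pod %q\n", pod.Name)
}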
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:08:16.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-lkkc
STEP: Creating a pod to test atomic-volume-subpath
Jan 23 14:08:16.176: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lkkc" in namespace "subpath-4230" to be "success or failure"
Jan 23 14:08:16.189: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.099997ms
Jan 23 14:08:18.200: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023202896s
Jan 23 14:08:20.207: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031131063s
Jan 23 14:08:22.259: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082395378s
Jan 23 14:08:24.271: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Running", Reason="", readiness=true. Elapsed: 8.095018688s
Jan 23 14:08:26.280: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Running", Reason="", readiness=true. Elapsed: 10.103766772s
Jan 23 14:08:28.291: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Running", Reason="", readiness=true. Elapsed: 12.114310337s
Jan 23 14:08:30.303: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Running", Reason="", readiness=true. Elapsed: 14.126863733s
Jan 23 14:08:32.315: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Running", Reason="", readiness=true. Elapsed: 16.138370868s
Jan 23 14:08:34.322: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Running", Reason="", readiness=true. Elapsed: 18.145459614s
Jan 23 14:08:36.330: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Running", Reason="", readiness=true. Elapsed: 20.153585518s
Jan 23 14:08:38.341: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Running", Reason="", readiness=true. Elapsed: 22.164201511s
Jan 23 14:08:40.349: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Running", Reason="", readiness=true. Elapsed: 24.172757055s
Jan 23 14:08:42.557: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Running", Reason="", readiness=true. Elapsed: 26.380770175s
Jan 23 14:08:44.572: INFO: Pod "pod-subpath-test-downwardapi-lkkc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.395451721s
STEP: Saw pod success
Jan 23 14:08:44.572: INFO: Pod "pod-subpath-test-downwardapi-lkkc" satisfied condition "success or failure"
Jan 23 14:08:44.578: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-lkkc container test-container-subpath-downwardapi-lkkc: 
STEP: delete the pod
Jan 23 14:08:44.652: INFO: Waiting for pod pod-subpath-test-downwardapi-lkkc to disappear
Jan 23 14:08:44.711: INFO: Pod pod-subpath-test-downwardapi-lkkc no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-lkkc
Jan 23 14:08:44.711: INFO: Deleting pod "pod-subpath-test-downwardapi-lkkc" in namespace "subpath-4230"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:08:44.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4230" for this suite.
Jan 23 14:08:50.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:08:50.963: INFO: namespace subpath-4230 deletion completed in 6.240780012s

• [SLOW TEST:34.957 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
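The downward-pod variant differs from the previous sketch only in the volume source: the subPath item is a downward API file rather than a configMap key. A sketch of just that difference, names and paths illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // The projected file carries pod metadata (here the pod's own name via
    // fieldRef) instead of configMap data; subPath mounts that single item.
    vol := corev1.Volume{
        Name: "downward-volume",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "podname",
                    FieldRef: &corev1.ObjectFieldSelector{
                        FieldPath: "metadata.name",
                    },
                }},
            },
        },
    }
    mount := corev1.VolumeMount{
        Name:      vol.Name,
        MountPath: "/etc/podname", // the mounted file
        SubPath:   "podname",      // the single downward-API item
    }
    fmt.Printf("mount %s at %s (subPath %s)\n", vol.Name, mount.MountPath, mount.SubPath)
}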
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:08:50.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 14:08:51.078: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5f30b78-087c-4288-91b5-4feac583f5ed" in namespace "downward-api-6400" to be "success or failure"
Jan 23 14:08:51.085: INFO: Pod "downwardapi-volume-a5f30b78-087c-4288-91b5-4feac583f5ed": Phase="Pending", Reason="", readiness=false. Elapsed: 7.046011ms
Jan 23 14:08:53.093: INFO: Pod "downwardapi-volume-a5f30b78-087c-4288-91b5-4feac583f5ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014921686s
Jan 23 14:08:55.101: INFO: Pod "downwardapi-volume-a5f30b78-087c-4288-91b5-4feac583f5ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022749883s
Jan 23 14:08:57.109: INFO: Pod "downwardapi-volume-a5f30b78-087c-4288-91b5-4feac583f5ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030863376s
Jan 23 14:08:59.121: INFO: Pod "downwardapi-volume-a5f30b78-087c-4288-91b5-4feac583f5ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042775482s
STEP: Saw pod success
Jan 23 14:08:59.121: INFO: Pod "downwardapi-volume-a5f30b78-087c-4288-91b5-4feac583f5ed" satisfied condition "success or failure"
Jan 23 14:08:59.127: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a5f30b78-087c-4288-91b5-4feac583f5ed container client-container: 
STEP: delete the pod
Jan 23 14:08:59.326: INFO: Waiting for pod downwardapi-volume-a5f30b78-087c-4288-91b5-4feac583f5ed to disappear
Jan 23 14:08:59.335: INFO: Pod downwardapi-volume-a5f30b78-087c-4288-91b5-4feac583f5ed no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:08:59.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6400" for this suite.
Jan 23 14:09:05.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:09:05.497: INFO: namespace downward-api-6400 deletion completed in 6.155026373s

• [SLOW TEST:14.534 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
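[editor's sketch] The test above projects the container's CPU limit into a file via the downward-API volume plugin and asserts the pod succeeds. A minimal reproduction with assumed names; note that resourceFieldRef inside a volume must name the container explicitly:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-cpu-limit-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        # Print the projected limit; it is written in whole cores by default.
        command: ["cat", "/etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: 500m              # the value the volume projects
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
    EOF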
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:09:05.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0123 14:09:16.341868       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 14:09:16.341: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:09:16.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6465" for this suite.
Jan 23 14:09:22.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:09:22.554: INFO: namespace gc-6465 deletion completed in 6.206670663s

• [SLOW TEST:17.057 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
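[editor's sketch] The garbage-collector test deletes the RC without orphaning and then waits for its pods to be collected. The kubectl equivalent on a v1.15 cluster, with an assumed RC name and namespace:

    # Default: dependents (the RC's pods) are garbage-collected too.
    kubectl delete rc my-rc --namespace=my-ns
    # Orphaning variant: delete only the RC, leave its pods running.
    kubectl delete rc my-rc --cascade=false --namespace=my-ns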
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:09:22.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 14:09:22.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:09:31.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4445" for this suite.
Jan 23 14:10:23.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:10:23.384: INFO: namespace pods-4445 deletion completed in 52.266740385s

• [SLOW TEST:60.828 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
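[editor's sketch] The websocket test drives the pod's exec subresource directly over a websocket. kubectl exec reaches the same subresource (over SPDY rather than a raw websocket), so a hand-run equivalent, with assumed pod and namespace names, is:

    # Underlying endpoint, for orientation:
    #   /api/v1/namespaces/<ns>/pods/<pod>/exec?command=echo&command=hello&stdout=true
    kubectl exec mypod --namespace=my-ns -- echo hello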
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:10:23.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 23 14:10:23.533: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3855,SelfLink:/api/v1/namespaces/watch-3855/configmaps/e2e-watch-test-label-changed,UID:0749c3f2-6a7d-4894-a34b-20fdd1c81dbf,ResourceVersion:21566999,Generation:0,CreationTimestamp:2020-01-23 14:10:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 14:10:23.533: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3855,SelfLink:/api/v1/namespaces/watch-3855/configmaps/e2e-watch-test-label-changed,UID:0749c3f2-6a7d-4894-a34b-20fdd1c81dbf,ResourceVersion:21567000,Generation:0,CreationTimestamp:2020-01-23 14:10:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 23 14:10:23.533: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3855,SelfLink:/api/v1/namespaces/watch-3855/configmaps/e2e-watch-test-label-changed,UID:0749c3f2-6a7d-4894-a34b-20fdd1c81dbf,ResourceVersion:21567001,Generation:0,CreationTimestamp:2020-01-23 14:10:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 23 14:10:33.643: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3855,SelfLink:/api/v1/namespaces/watch-3855/configmaps/e2e-watch-test-label-changed,UID:0749c3f2-6a7d-4894-a34b-20fdd1c81dbf,ResourceVersion:21567016,Generation:0,CreationTimestamp:2020-01-23 14:10:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 14:10:33.645: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3855,SelfLink:/api/v1/namespaces/watch-3855/configmaps/e2e-watch-test-label-changed,UID:0749c3f2-6a7d-4894-a34b-20fdd1c81dbf,ResourceVersion:21567017,Generation:0,CreationTimestamp:2020-01-23 14:10:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 23 14:10:33.645: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3855,SelfLink:/api/v1/namespaces/watch-3855/configmaps/e2e-watch-test-label-changed,UID:0749c3f2-6a7d-4894-a34b-20fdd1c81dbf,ResourceVersion:21567018,Generation:0,CreationTimestamp:2020-01-23 14:10:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:10:33.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3855" for this suite.
Jan 23 14:10:39.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:10:39.822: INFO: namespace watch-3855 deletion completed in 6.168075484s

• [SLOW TEST:16.437 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
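[editor's sketch] The watch behaviour above can be observed by hand with a label-selector watch: relabeling the configmap out of the selector surfaces as a DELETED event, and relabeling it back surfaces as ADDED. The label is the one from this run; the namespace name is an assumption:

    kubectl get configmaps --namespace=my-ns --watch \
      -l watch-this-configmap=label-changed-and-restored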
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:10:39.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 23 14:10:56.115: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 14:10:56.135: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 14:10:58.136: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 14:10:58.151: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 14:11:00.136: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 14:11:00.146: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 14:11:02.136: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 14:11:02.157: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 14:11:04.136: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 14:11:04.143: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 14:11:06.136: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 14:11:06.149: INFO: Pod pod-with-poststart-http-hook still exists
Jan 23 14:11:08.136: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 23 14:11:08.145: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:11:08.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9061" for this suite.
Jan 23 14:11:30.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:11:30.289: INFO: namespace container-lifecycle-hook-9061 deletion completed in 22.137245682s

• [SLOW TEST:50.465 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
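[editor's sketch] A minimal pod of the shape the poststart-http test builds: a postStart httpGet hook aimed at a separately created handler pod. The kubelet holds the container's startup until the hook returns, which is what the "check poststart hook" step verifies. The host address, image, and names below are assumptions, not values from this run:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: poststart-http-hook-demo
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
        lifecycle:
          postStart:
            httpGet:
              host: 10.32.0.10     # assumed address of the hook-handler pod
              port: 8080
              path: /echo?msg=poststart
    EOF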
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:11:30.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-23b0761d-63a4-4198-a77e-3bc43042ccb4
STEP: Creating a pod to test consume configMaps
Jan 23 14:11:30.428: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3e72af3d-c7f8-4cde-a12f-1068772fcea9" in namespace "projected-6826" to be "success or failure"
Jan 23 14:11:30.438: INFO: Pod "pod-projected-configmaps-3e72af3d-c7f8-4cde-a12f-1068772fcea9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.238095ms
Jan 23 14:11:32.447: INFO: Pod "pod-projected-configmaps-3e72af3d-c7f8-4cde-a12f-1068772fcea9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018970688s
Jan 23 14:11:34.456: INFO: Pod "pod-projected-configmaps-3e72af3d-c7f8-4cde-a12f-1068772fcea9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028463893s
Jan 23 14:11:36.482: INFO: Pod "pod-projected-configmaps-3e72af3d-c7f8-4cde-a12f-1068772fcea9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053823512s
Jan 23 14:11:38.497: INFO: Pod "pod-projected-configmaps-3e72af3d-c7f8-4cde-a12f-1068772fcea9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069475365s
STEP: Saw pod success
Jan 23 14:11:38.498: INFO: Pod "pod-projected-configmaps-3e72af3d-c7f8-4cde-a12f-1068772fcea9" satisfied condition "success or failure"
Jan 23 14:11:38.503: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-3e72af3d-c7f8-4cde-a12f-1068772fcea9 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 23 14:11:39.052: INFO: Waiting for pod pod-projected-configmaps-3e72af3d-c7f8-4cde-a12f-1068772fcea9 to disappear
Jan 23 14:11:39.064: INFO: Pod pod-projected-configmaps-3e72af3d-c7f8-4cde-a12f-1068772fcea9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:11:39.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6826" for this suite.
Jan 23 14:11:45.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:11:45.341: INFO: namespace projected-6826 deletion completed in 6.224909798s

• [SLOW TEST:15.052 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
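[editor's sketch] The projected-configmap test consumes a configMap key through a projected volume rather than a plain configMap volume. A minimal reproduction with assumed names:

    kubectl create configmap my-config --from-literal=data-1=value-1 \
      --namespace=my-ns
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-configmap-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["cat", "/etc/projected/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: my-config
              items:
              - key: data-1
                path: data-1
    EOF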
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:11:45.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 23 14:11:45.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1060'
Jan 23 14:11:47.618: INFO: stderr: ""
Jan 23 14:11:47.618: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 23 14:11:48.634: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:11:48.634: INFO: Found 0 / 1
Jan 23 14:11:49.629: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:11:49.630: INFO: Found 0 / 1
Jan 23 14:11:50.633: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:11:50.633: INFO: Found 0 / 1
Jan 23 14:11:51.632: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:11:51.633: INFO: Found 0 / 1
Jan 23 14:11:52.632: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:11:52.633: INFO: Found 0 / 1
Jan 23 14:11:53.636: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:11:53.636: INFO: Found 0 / 1
Jan 23 14:11:54.625: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:11:54.625: INFO: Found 1 / 1
Jan 23 14:11:54.625: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 23 14:11:54.629: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:11:54.629: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 23 14:11:54.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-xzjjf --namespace=kubectl-1060 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 23 14:11:54.735: INFO: stderr: ""
Jan 23 14:11:54.735: INFO: stdout: "pod/redis-master-xzjjf patched\n"
STEP: checking annotations
Jan 23 14:11:54.744: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:11:54.744: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:11:54.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1060" for this suite.
Jan 23 14:12:16.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:12:16.926: INFO: namespace kubectl-1060 deletion completed in 22.176162075s

• [SLOW TEST:31.584 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
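[editor's sketch] The patch command the test runs is visible verbatim above. By hand, the same annotation patch plus a verification read (pod and namespace names are assumptions; the generated pod name from this run is gone):

    kubectl patch pod redis-master-demo --namespace=my-ns \
      -p '{"metadata":{"annotations":{"x":"y"}}}'
    # Verify the annotation landed:
    kubectl get pod redis-master-demo --namespace=my-ns \
      -o jsonpath='{.metadata.annotations.x}'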
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:12:16.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-65c5b74c-f854-4b62-94a2-0fbbe3878b13
STEP: Creating a pod to test consume secrets
Jan 23 14:12:17.142: INFO: Waiting up to 5m0s for pod "pod-secrets-246908fe-82f9-4279-b49a-348e2bcd4258" in namespace "secrets-1683" to be "success or failure"
Jan 23 14:12:17.195: INFO: Pod "pod-secrets-246908fe-82f9-4279-b49a-348e2bcd4258": Phase="Pending", Reason="", readiness=false. Elapsed: 52.597182ms
Jan 23 14:12:19.205: INFO: Pod "pod-secrets-246908fe-82f9-4279-b49a-348e2bcd4258": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062577504s
Jan 23 14:12:21.260: INFO: Pod "pod-secrets-246908fe-82f9-4279-b49a-348e2bcd4258": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117358923s
Jan 23 14:12:23.274: INFO: Pod "pod-secrets-246908fe-82f9-4279-b49a-348e2bcd4258": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131311505s
Jan 23 14:12:25.284: INFO: Pod "pod-secrets-246908fe-82f9-4279-b49a-348e2bcd4258": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.141215106s
STEP: Saw pod success
Jan 23 14:12:25.284: INFO: Pod "pod-secrets-246908fe-82f9-4279-b49a-348e2bcd4258" satisfied condition "success or failure"
Jan 23 14:12:25.287: INFO: Trying to get logs from node iruya-node pod pod-secrets-246908fe-82f9-4279-b49a-348e2bcd4258 container secret-volume-test: 
STEP: delete the pod
Jan 23 14:12:26.242: INFO: Waiting for pod pod-secrets-246908fe-82f9-4279-b49a-348e2bcd4258 to disappear
Jan 23 14:12:26.249: INFO: Pod pod-secrets-246908fe-82f9-4279-b49a-348e2bcd4258 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:12:26.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1683" for this suite.
Jan 23 14:12:32.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:12:32.548: INFO: namespace secrets-1683 deletion completed in 6.281077618s

• [SLOW TEST:15.621 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
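[editor's sketch] The multiple-volume secret consumption exercised above amounts to mounting the same secret twice in one pod. Names, key, and image are assumptions:

    kubectl create secret generic my-secret --from-literal=data-1=value-1 \
      --namespace=my-ns
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-multi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
        volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-1
          readOnly: true
        - name: secret-volume-2
          mountPath: /etc/secret-2
          readOnly: true
      volumes:
      - name: secret-volume-1
        secret:
          secretName: my-secret
      - name: secret-volume-2
        secret:
          secretName: my-secret
    EOF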
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:12:32.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4953/configmap-test-6908f955-db06-4c73-aa51-a1df248ec873
STEP: Creating a pod to test consume configMaps
Jan 23 14:12:32.697: INFO: Waiting up to 5m0s for pod "pod-configmaps-14049607-095a-4d5e-9f7f-abea4d33ebae" in namespace "configmap-4953" to be "success or failure"
Jan 23 14:12:32.795: INFO: Pod "pod-configmaps-14049607-095a-4d5e-9f7f-abea4d33ebae": Phase="Pending", Reason="", readiness=false. Elapsed: 97.457295ms
Jan 23 14:12:34.803: INFO: Pod "pod-configmaps-14049607-095a-4d5e-9f7f-abea4d33ebae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105543746s
Jan 23 14:12:36.828: INFO: Pod "pod-configmaps-14049607-095a-4d5e-9f7f-abea4d33ebae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130576742s
Jan 23 14:12:38.838: INFO: Pod "pod-configmaps-14049607-095a-4d5e-9f7f-abea4d33ebae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140250059s
Jan 23 14:12:40.908: INFO: Pod "pod-configmaps-14049607-095a-4d5e-9f7f-abea4d33ebae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.210428765s
STEP: Saw pod success
Jan 23 14:12:40.908: INFO: Pod "pod-configmaps-14049607-095a-4d5e-9f7f-abea4d33ebae" satisfied condition "success or failure"
Jan 23 14:12:40.914: INFO: Trying to get logs from node iruya-node pod pod-configmaps-14049607-095a-4d5e-9f7f-abea4d33ebae container env-test: 
STEP: delete the pod
Jan 23 14:12:41.068: INFO: Waiting for pod pod-configmaps-14049607-095a-4d5e-9f7f-abea4d33ebae to disappear
Jan 23 14:12:41.075: INFO: Pod pod-configmaps-14049607-095a-4d5e-9f7f-abea4d33ebae no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:12:41.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4953" for this suite.
Jan 23 14:12:47.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:12:47.214: INFO: namespace configmap-4953 deletion completed in 6.129642068s

• [SLOW TEST:14.665 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
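[editor's sketch] The environment-consumption path tested above maps a configMap key into a container environment variable. A minimal reproduction with assumed names:

    kubectl create configmap my-config --from-literal=data-1=value-1 \
      --namespace=my-ns
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "env | grep CONFIG_DATA"]
        env:
        - name: CONFIG_DATA
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: data-1
    EOF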
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:12:47.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-qwv8
STEP: Creating a pod to test atomic-volume-subpath
Jan 23 14:12:47.285: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qwv8" in namespace "subpath-2343" to be "success or failure"
Jan 23 14:12:47.350: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Pending", Reason="", readiness=false. Elapsed: 64.570509ms
Jan 23 14:12:49.539: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253550201s
Jan 23 14:12:51.552: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266637158s
Jan 23 14:12:53.562: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276375409s
Jan 23 14:12:55.570: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.284504801s
Jan 23 14:12:57.581: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Running", Reason="", readiness=true. Elapsed: 10.29508169s
Jan 23 14:12:59.589: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Running", Reason="", readiness=true. Elapsed: 12.303024426s
Jan 23 14:13:01.596: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Running", Reason="", readiness=true. Elapsed: 14.310902684s
Jan 23 14:13:03.607: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Running", Reason="", readiness=true. Elapsed: 16.321772502s
Jan 23 14:13:05.616: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Running", Reason="", readiness=true. Elapsed: 18.330581797s
Jan 23 14:13:07.626: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Running", Reason="", readiness=true. Elapsed: 20.340365651s
Jan 23 14:13:09.641: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Running", Reason="", readiness=true. Elapsed: 22.355747093s
Jan 23 14:13:11.652: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Running", Reason="", readiness=true. Elapsed: 24.366019343s
Jan 23 14:13:13.671: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Running", Reason="", readiness=true. Elapsed: 26.38585923s
Jan 23 14:13:15.681: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Running", Reason="", readiness=true. Elapsed: 28.395648335s
Jan 23 14:13:17.690: INFO: Pod "pod-subpath-test-projected-qwv8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.404891726s
STEP: Saw pod success
Jan 23 14:13:17.691: INFO: Pod "pod-subpath-test-projected-qwv8" satisfied condition "success or failure"
Jan 23 14:13:17.695: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-qwv8 container test-container-subpath-projected-qwv8: 
STEP: delete the pod
Jan 23 14:13:17.979: INFO: Waiting for pod pod-subpath-test-projected-qwv8 to disappear
Jan 23 14:13:17.996: INFO: Pod pod-subpath-test-projected-qwv8 no longer exists
STEP: Deleting pod pod-subpath-test-projected-qwv8
Jan 23 14:13:17.997: INFO: Deleting pod "pod-subpath-test-projected-qwv8" in namespace "subpath-2343"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:13:18.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2343" for this suite.
Jan 23 14:13:24.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:13:24.171: INFO: namespace subpath-2343 deletion completed in 6.165844294s

• [SLOW TEST:36.957 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
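[editor's sketch] Same subPath pattern as the earlier atomic-writer cases, this time with a projected source; assumed names throughout:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-projected-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["cat", "/probe/projected-file"]
        volumeMounts:
        - name: proj
          mountPath: /probe/projected-file
          subPath: projected-file
      volumes:
      - name: proj
        projected:
          sources:
          - configMap:
              name: my-config
              items:
              - key: data-1
                path: projected-file
    EOF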
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:13:24.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 23 14:13:24.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-241'
Jan 23 14:13:24.334: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 14:13:24.335: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan 23 14:13:26.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-241'
Jan 23 14:13:26.537: INFO: stderr: ""
Jan 23 14:13:26.537: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:13:26.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-241" for this suite.
Jan 23 14:13:32.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:13:32.674: INFO: namespace kubectl-241 deletion completed in 6.129062416s

• [SLOW TEST:8.502 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
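[editor's sketch] The deprecation warning in the stderr line above is the v1.15 behaviour: a bare `kubectl run` goes through the deployment/apps.v1 generator. The replacements that warning points at:

    # A bare Pod instead of a Deployment:
    kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
      --image=docker.io/library/nginx:1.14-alpine
    # Or an explicit Deployment:
    kubectl create deployment e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine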
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:13:32.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-83607e3c-5b12-4fd6-9157-3e51e3cc04b5
STEP: Creating a pod to test consume secrets
Jan 23 14:13:32.832: INFO: Waiting up to 5m0s for pod "pod-secrets-f66a378f-e8e4-4e1c-974e-fcbc13a5a0fa" in namespace "secrets-9428" to be "success or failure"
Jan 23 14:13:32.836: INFO: Pod "pod-secrets-f66a378f-e8e4-4e1c-974e-fcbc13a5a0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.759742ms
Jan 23 14:13:34.846: INFO: Pod "pod-secrets-f66a378f-e8e4-4e1c-974e-fcbc13a5a0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013498277s
Jan 23 14:13:36.856: INFO: Pod "pod-secrets-f66a378f-e8e4-4e1c-974e-fcbc13a5a0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023904402s
Jan 23 14:13:38.870: INFO: Pod "pod-secrets-f66a378f-e8e4-4e1c-974e-fcbc13a5a0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037804284s
Jan 23 14:13:40.881: INFO: Pod "pod-secrets-f66a378f-e8e4-4e1c-974e-fcbc13a5a0fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048720622s
STEP: Saw pod success
Jan 23 14:13:40.881: INFO: Pod "pod-secrets-f66a378f-e8e4-4e1c-974e-fcbc13a5a0fa" satisfied condition "success or failure"
Jan 23 14:13:40.889: INFO: Trying to get logs from node iruya-node pod pod-secrets-f66a378f-e8e4-4e1c-974e-fcbc13a5a0fa container secret-volume-test: 
STEP: delete the pod
Jan 23 14:13:41.112: INFO: Waiting for pod pod-secrets-f66a378f-e8e4-4e1c-974e-fcbc13a5a0fa to disappear
Jan 23 14:13:41.119: INFO: Pod pod-secrets-f66a378f-e8e4-4e1c-974e-fcbc13a5a0fa no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:13:41.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9428" for this suite.
Jan 23 14:13:47.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:13:47.262: INFO: namespace secrets-9428 deletion completed in 6.135381188s

• [SLOW TEST:14.587 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
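[editor's sketch] The defaultMode variant above sets one mode for every file in the secret volume. A minimal reproduction; the 0400 value and all names are assumptions for illustration, not the mode this particular run asserted:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-defaultmode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/secret-volume"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: my-secret
          defaultMode: 0400      # applies to every projected file
    EOF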
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:13:47.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 14:13:47.354: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77015fce-8fd1-4d4d-9c2b-4279a7716871" in namespace "projected-2160" to be "success or failure"
Jan 23 14:13:47.371: INFO: Pod "downwardapi-volume-77015fce-8fd1-4d4d-9c2b-4279a7716871": Phase="Pending", Reason="", readiness=false. Elapsed: 16.607629ms
Jan 23 14:13:49.382: INFO: Pod "downwardapi-volume-77015fce-8fd1-4d4d-9c2b-4279a7716871": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028174822s
Jan 23 14:13:51.390: INFO: Pod "downwardapi-volume-77015fce-8fd1-4d4d-9c2b-4279a7716871": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035505711s
Jan 23 14:13:53.397: INFO: Pod "downwardapi-volume-77015fce-8fd1-4d4d-9c2b-4279a7716871": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043131933s
Jan 23 14:13:55.415: INFO: Pod "downwardapi-volume-77015fce-8fd1-4d4d-9c2b-4279a7716871": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060902603s
STEP: Saw pod success
Jan 23 14:13:55.415: INFO: Pod "downwardapi-volume-77015fce-8fd1-4d4d-9c2b-4279a7716871" satisfied condition "success or failure"
Jan 23 14:13:55.420: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-77015fce-8fd1-4d4d-9c2b-4279a7716871 container client-container: 
STEP: delete the pod
Jan 23 14:13:55.476: INFO: Waiting for pod downwardapi-volume-77015fce-8fd1-4d4d-9c2b-4279a7716871 to disappear
Jan 23 14:13:55.613: INFO: Pod downwardapi-volume-77015fce-8fd1-4d4d-9c2b-4279a7716871 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:13:55.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2160" for this suite.
Jan 23 14:14:01.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:14:01.880: INFO: namespace projected-2160 deletion completed in 6.258111591s

• [SLOW TEST:14.617 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
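[editor's sketch] Per-item modes, as checked above, are set on the individual projected items rather than on the volume default. Sketch with assumed names and mode:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-item-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
                mode: 0400       # assumed per-item mode for illustration
    EOF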
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:14:01.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 23 14:14:02.037: INFO: Waiting up to 5m0s for pod "pod-9526af88-6fc8-4a65-ae3a-376e1c5b7536" in namespace "emptydir-4137" to be "success or failure"
Jan 23 14:14:02.046: INFO: Pod "pod-9526af88-6fc8-4a65-ae3a-376e1c5b7536": Phase="Pending", Reason="", readiness=false. Elapsed: 9.675454ms
Jan 23 14:14:04.057: INFO: Pod "pod-9526af88-6fc8-4a65-ae3a-376e1c5b7536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020753037s
Jan 23 14:14:06.065: INFO: Pod "pod-9526af88-6fc8-4a65-ae3a-376e1c5b7536": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028138617s
Jan 23 14:14:08.078: INFO: Pod "pod-9526af88-6fc8-4a65-ae3a-376e1c5b7536": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040967346s
Jan 23 14:14:10.087: INFO: Pod "pod-9526af88-6fc8-4a65-ae3a-376e1c5b7536": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049943662s
STEP: Saw pod success
Jan 23 14:14:10.087: INFO: Pod "pod-9526af88-6fc8-4a65-ae3a-376e1c5b7536" satisfied condition "success or failure"
Jan 23 14:14:10.093: INFO: Trying to get logs from node iruya-node pod pod-9526af88-6fc8-4a65-ae3a-376e1c5b7536 container test-container: 
STEP: delete the pod
Jan 23 14:14:10.135: INFO: Waiting for pod pod-9526af88-6fc8-4a65-ae3a-376e1c5b7536 to disappear
Jan 23 14:14:10.140: INFO: Pod pod-9526af88-6fc8-4a65-ae3a-376e1c5b7536 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:14:10.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4137" for this suite.
Jan 23 14:14:16.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:14:16.349: INFO: namespace emptydir-4137 deletion completed in 6.200528086s

• [SLOW TEST:14.468 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
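[editor's sketch] The (non-root,0644,tmpfs) case above writes a 0644 file into a memory-backed emptyDir as a non-root user. A minimal equivalent; the UID, names, and image are assumptions:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001          # non-root, matching the test's intent
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory         # tmpfs-backed
    EOF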
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:14:16.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1583
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 23 14:14:16.465: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 23 14:14:52.713: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1583 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:14:52.713: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:14:52.811632       8 log.go:172] (0xc000834370) (0xc0024d41e0) Create stream
I0123 14:14:52.811748       8 log.go:172] (0xc000834370) (0xc0024d41e0) Stream added, broadcasting: 1
I0123 14:14:52.819276       8 log.go:172] (0xc000834370) Reply frame received for 1
I0123 14:14:52.819311       8 log.go:172] (0xc000834370) (0xc00129e5a0) Create stream
I0123 14:14:52.819322       8 log.go:172] (0xc000834370) (0xc00129e5a0) Stream added, broadcasting: 3
I0123 14:14:52.822631       8 log.go:172] (0xc000834370) Reply frame received for 3
I0123 14:14:52.822656       8 log.go:172] (0xc000834370) (0xc0024d4280) Create stream
I0123 14:14:52.822665       8 log.go:172] (0xc000834370) (0xc0024d4280) Stream added, broadcasting: 5
I0123 14:14:52.825571       8 log.go:172] (0xc000834370) Reply frame received for 5
I0123 14:14:52.971602       8 log.go:172] (0xc000834370) Data frame received for 3
I0123 14:14:52.971697       8 log.go:172] (0xc00129e5a0) (3) Data frame handling
I0123 14:14:52.971724       8 log.go:172] (0xc00129e5a0) (3) Data frame sent
I0123 14:14:53.123409       8 log.go:172] (0xc000834370) Data frame received for 1
I0123 14:14:53.123736       8 log.go:172] (0xc000834370) (0xc00129e5a0) Stream removed, broadcasting: 3
I0123 14:14:53.123911       8 log.go:172] (0xc0024d41e0) (1) Data frame handling
I0123 14:14:53.123966       8 log.go:172] (0xc0024d41e0) (1) Data frame sent
I0123 14:14:53.124036       8 log.go:172] (0xc000834370) (0xc0024d4280) Stream removed, broadcasting: 5
I0123 14:14:53.124096       8 log.go:172] (0xc000834370) (0xc0024d41e0) Stream removed, broadcasting: 1
I0123 14:14:53.124129       8 log.go:172] (0xc000834370) Go away received
I0123 14:14:53.124284       8 log.go:172] (0xc000834370) (0xc0024d41e0) Stream removed, broadcasting: 1
I0123 14:14:53.124323       8 log.go:172] (0xc000834370) (0xc00129e5a0) Stream removed, broadcasting: 3
I0123 14:14:53.124331       8 log.go:172] (0xc000834370) (0xc0024d4280) Stream removed, broadcasting: 5
Jan 23 14:14:53.124: INFO: Found all expected endpoints: [netserver-0]
Jan 23 14:14:53.135: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1583 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:14:53.135: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:14:53.196457       8 log.go:172] (0xc000979d90) (0xc001428140) Create stream
I0123 14:14:53.196583       8 log.go:172] (0xc000979d90) (0xc001428140) Stream added, broadcasting: 1
I0123 14:14:53.203328       8 log.go:172] (0xc000979d90) Reply frame received for 1
I0123 14:14:53.203361       8 log.go:172] (0xc000979d90) (0xc0014281e0) Create stream
I0123 14:14:53.203372       8 log.go:172] (0xc000979d90) (0xc0014281e0) Stream added, broadcasting: 3
I0123 14:14:53.207276       8 log.go:172] (0xc000979d90) Reply frame received for 3
I0123 14:14:53.207408       8 log.go:172] (0xc000979d90) (0xc000b066e0) Create stream
I0123 14:14:53.207421       8 log.go:172] (0xc000979d90) (0xc000b066e0) Stream added, broadcasting: 5
I0123 14:14:53.209476       8 log.go:172] (0xc000979d90) Reply frame received for 5
I0123 14:14:53.346282       8 log.go:172] (0xc000979d90) Data frame received for 3
I0123 14:14:53.346450       8 log.go:172] (0xc0014281e0) (3) Data frame handling
I0123 14:14:53.346509       8 log.go:172] (0xc0014281e0) (3) Data frame sent
I0123 14:14:53.512320       8 log.go:172] (0xc000979d90) (0xc0014281e0) Stream removed, broadcasting: 3
I0123 14:14:53.512566       8 log.go:172] (0xc000979d90) Data frame received for 1
I0123 14:14:53.512602       8 log.go:172] (0xc001428140) (1) Data frame handling
I0123 14:14:53.512623       8 log.go:172] (0xc000979d90) (0xc000b066e0) Stream removed, broadcasting: 5
I0123 14:14:53.512670       8 log.go:172] (0xc001428140) (1) Data frame sent
I0123 14:14:53.512684       8 log.go:172] (0xc000979d90) (0xc001428140) Stream removed, broadcasting: 1
I0123 14:14:53.512710       8 log.go:172] (0xc000979d90) Go away received
I0123 14:14:53.512898       8 log.go:172] (0xc000979d90) (0xc001428140) Stream removed, broadcasting: 1
I0123 14:14:53.512906       8 log.go:172] (0xc000979d90) (0xc0014281e0) Stream removed, broadcasting: 3
I0123 14:14:53.512911       8 log.go:172] (0xc000979d90) (0xc000b066e0) Stream removed, broadcasting: 5
Jan 23 14:14:53.512: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:14:53.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1583" for this suite.
Jan 23 14:15:17.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:15:17.661: INFO: namespace pod-network-test-1583 deletion completed in 24.137567883s

• [SLOW TEST:61.310 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
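[editor's sketch] The probe appears verbatim in the ExecWithOptions lines above: from the host-network helper pod, curl each netserver pod's /hostName endpoint. Run by hand it is an exec (pod IP and namespace are the ones from this run and no longer exist):

    kubectl exec host-test-container-pod --namespace=pod-network-test-1583 -- \
      /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 \
        http://10.44.0.1:8080/hostName"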
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:15:17.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 23 14:15:17.797: INFO: namespace kubectl-2182
Jan 23 14:15:17.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2182'
Jan 23 14:15:18.119: INFO: stderr: ""
Jan 23 14:15:18.120: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 23 14:15:19.130: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:15:19.130: INFO: Found 0 / 1
Jan 23 14:15:20.138: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:15:20.138: INFO: Found 0 / 1
Jan 23 14:15:21.134: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:15:21.135: INFO: Found 0 / 1
Jan 23 14:15:22.133: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:15:22.133: INFO: Found 0 / 1
Jan 23 14:15:23.128: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:15:23.129: INFO: Found 0 / 1
Jan 23 14:15:24.139: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:15:24.139: INFO: Found 0 / 1
Jan 23 14:15:25.129: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:15:25.129: INFO: Found 0 / 1
Jan 23 14:15:26.129: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:15:26.129: INFO: Found 0 / 1
Jan 23 14:15:27.127: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:15:27.128: INFO: Found 1 / 1
Jan 23 14:15:27.128: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 23 14:15:27.131: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:15:27.131: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 23 14:15:27.131: INFO: wait on redis-master startup in kubectl-2182 
Jan 23 14:15:27.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xf9qv redis-master --namespace=kubectl-2182'
Jan 23 14:15:27.261: INFO: stderr: ""
Jan 23 14:15:27.262: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 23 Jan 14:15:25.140 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jan 14:15:25.140 # Server started, Redis version 3.2.12\n1:M 23 Jan 14:15:25.140 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jan 14:15:25.140 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 23 14:15:27.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2182'
Jan 23 14:15:27.418: INFO: stderr: ""
Jan 23 14:15:27.418: INFO: stdout: "service/rm2 exposed\n"
Jan 23 14:15:27.498: INFO: Service rm2 in namespace kubectl-2182 found.
STEP: exposing service
Jan 23 14:15:29.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2182'
Jan 23 14:15:29.788: INFO: stderr: ""
Jan 23 14:15:29.788: INFO: stdout: "service/rm3 exposed\n"
Jan 23 14:15:29.834: INFO: Service rm3 in namespace kubectl-2182 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:15:31.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2182" for this suite.
Jan 23 14:15:53.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:15:54.030: INFO: namespace kubectl-2182 deletion completed in 22.172468547s

• [SLOW TEST:36.369 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
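For reference, the expose sequence above can be replayed outside the e2e harness with plain kubectl. A minimal sketch: the two expose commands and the redis 3.2.12 image come straight from the log, while the demo-ns namespace and the RC manifest details are hypothetical stand-ins for what the framework generates.

kubectl create namespace demo-ns
kubectl apply -n demo-ns -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis:3.2.12
        ports:
        - containerPort: 6379
EOF
# Expose the RC as a service, then expose that service under a second name,
# mirroring the rm2/rm3 steps in the test
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 -n demo-ns
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 -n demo-ns
kubectl get svc rm2 rm3 -n demo-ns
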
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:15:54.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-8128
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8128
STEP: Deleting pre-stop pod
Jan 23 14:16:15.264: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:16:15.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8128" for this suite.
Jan 23 14:16:53.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:16:53.660: INFO: namespace prestop-8128 deletion completed in 38.333749248s

• [SLOW TEST:59.629 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
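The test above drives a server/tester pod pair; the sketch below reduces it to the essential mechanism, the container lifecycle preStop hook. All names and the busybox:1.29 image are hypothetical.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop >> /tmp/hook.log"]
EOF
# Deleting the pod runs the preStop hook before SIGTERM reaches the container,
# which is what the tester pod reports as {"Received": {"prestop": 1}}
kubectl delete pod prestop-demo
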
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:16:53.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 23 14:17:02.450: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1895 pod-service-account-47ac4ef5-bb84-4177-a486-dc429c476d00 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 23 14:17:03.125: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1895 pod-service-account-47ac4ef5-bb84-4177-a486-dc429c476d00 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 23 14:17:03.505: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1895 pod-service-account-47ac4ef5-bb84-4177-a486-dc429c476d00 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:17:04.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1895" for this suite.
Jan 23 14:17:10.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:17:10.274: INFO: namespace svcaccounts-1895 deletion completed in 6.216658109s

• [SLOW TEST:16.614 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
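The three kubectl exec commands in this test read the files the kubelet auto-mounts for the pod's service account. A minimal sketch with a throwaway pod; the pod name sa-demo and the busybox:1.29 image are hypothetical, the mount path is fixed by Kubernetes.

kubectl run sa-demo --image=busybox:1.29 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/sa-demo
# Token, CA bundle, and namespace are projected at this well-known path
kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
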
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:17:10.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 23 14:17:20.977: INFO: Successfully updated pod "labelsupdate5346bdf4-2845-4d83-8d1b-a1a9c9ca2130"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:17:23.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9995" for this suite.
Jan 23 14:17:45.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:17:45.251: INFO: namespace downward-api-9995 deletion completed in 22.150500036s

• [SLOW TEST:34.977 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
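A minimal sketch of the mechanism under test: a downwardAPI volume that re-renders metadata.labels when the pod's labels change. Names, labels, and image are hypothetical.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    tier: canary
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
# Relabel the pod; the kubelet rewrites /etc/podinfo/labels on its next sync,
# which is the "Successfully updated pod" step the test waits for
kubectl label pod labels-demo tier=stable --overwrite
kubectl logs labels-demo --tail=5
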
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:17:45.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e379a36c-5593-416a-aeed-ab73de202ee7
STEP: Creating a pod to test consume configMaps
Jan 23 14:17:45.357: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85" in namespace "projected-5548" to be "success or failure"
Jan 23 14:17:45.447: INFO: Pod "pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85": Phase="Pending", Reason="", readiness=false. Elapsed: 90.203996ms
Jan 23 14:17:47.459: INFO: Pod "pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102514718s
Jan 23 14:17:49.467: INFO: Pod "pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110315953s
Jan 23 14:17:51.477: INFO: Pod "pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120471109s
Jan 23 14:17:53.491: INFO: Pod "pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134490185s
Jan 23 14:17:55.543: INFO: Pod "pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.186120035s
STEP: Saw pod success
Jan 23 14:17:55.543: INFO: Pod "pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85" satisfied condition "success or failure"
Jan 23 14:17:55.548: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 23 14:17:55.951: INFO: Waiting for pod pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85 to disappear
Jan 23 14:17:55.961: INFO: Pod pod-projected-configmaps-15cb2441-7688-412b-a3de-240cf3458b85 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:17:55.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5548" for this suite.
Jan 23 14:18:02.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:18:02.218: INFO: namespace projected-5548 deletion completed in 6.246102703s

• [SLOW TEST:16.967 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
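"Mappings and Item mode set" in the test name refers to remapping a ConfigMap key to a custom path with a per-item file mode inside a projected volume. A sketch under hypothetical names:

kubectl create configmap cm-demo --from-literal=data-2=value-2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "ls -lR /etc/projected && cat /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-demo
          items:
          - key: data-2
            path: path/to/data-2   # the mapping
            mode: 0400             # the per-item mode
EOF
kubectl logs projected-cm-demo
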
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:18:02.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 14:18:02.319: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b363b3c1-a45e-4bf8-9867-53f609b144c2" in namespace "projected-2726" to be "success or failure"
Jan 23 14:18:02.339: INFO: Pod "downwardapi-volume-b363b3c1-a45e-4bf8-9867-53f609b144c2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.070091ms
Jan 23 14:18:04.349: INFO: Pod "downwardapi-volume-b363b3c1-a45e-4bf8-9867-53f609b144c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029269644s
Jan 23 14:18:06.361: INFO: Pod "downwardapi-volume-b363b3c1-a45e-4bf8-9867-53f609b144c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041858877s
Jan 23 14:18:08.372: INFO: Pod "downwardapi-volume-b363b3c1-a45e-4bf8-9867-53f609b144c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052743762s
Jan 23 14:18:10.380: INFO: Pod "downwardapi-volume-b363b3c1-a45e-4bf8-9867-53f609b144c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060628524s
STEP: Saw pod success
Jan 23 14:18:10.380: INFO: Pod "downwardapi-volume-b363b3c1-a45e-4bf8-9867-53f609b144c2" satisfied condition "success or failure"
Jan 23 14:18:10.383: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b363b3c1-a45e-4bf8-9867-53f609b144c2 container client-container: 
STEP: delete the pod
Jan 23 14:18:10.433: INFO: Waiting for pod downwardapi-volume-b363b3c1-a45e-4bf8-9867-53f609b144c2 to disappear
Jan 23 14:18:10.437: INFO: Pod downwardapi-volume-b363b3c1-a45e-4bf8-9867-53f609b144c2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:18:10.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2726" for this suite.
Jan 23 14:18:16.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:18:16.595: INFO: namespace projected-2726 deletion completed in 6.149795141s

• [SLOW TEST:14.376 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
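The point of this test is a fallback rule: when a container declares no CPU limit, a downward API resourceFieldRef for limits.cpu resolves to the node's allocatable CPU. A sketch under hypothetical names; requests.memory and limits.memory behave the same way in the companion tests.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # No resources.limits.cpu set, so the projected value falls back to
    # node allocatable CPU
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit; echo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs cpu-limit-demo
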
SSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:18:16.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7283, will wait for the garbage collector to delete the pods
Jan 23 14:18:28.806: INFO: Deleting Job.batch foo took: 10.709688ms
Jan 23 14:18:29.107: INFO: Terminating Job.batch foo pods took: 301.00862ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:19:16.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7283" for this suite.
Jan 23 14:19:22.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:19:22.792: INFO: namespace job-7283 deletion completed in 6.148411486s

• [SLOW TEST:66.196 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
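The "will wait for the garbage collector to delete the pods" step corresponds to an ordinary cascading delete. A sketch with hypothetical names; parallelism: 2 mirrors the "active pods == parallelism" check above.

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
EOF
# Default deletion cascades: the GC reaps the pods after the Job object goes
kubectl delete job foo
kubectl get pods -l job-name=foo   # job-name label is set by the job controller
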
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:19:22.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0123 14:19:26.335410       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 14:19:26.335: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:19:26.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2632" for this suite.
Jan 23 14:19:32.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:19:32.760: INFO: namespace gc-2632 deletion completed in 6.419672577s

• [SLOW TEST:9.968 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
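The same non-orphaning behavior can be observed by hand. kubectl create deployment labels its pods app=<name>; everything else here is a hypothetical stand-in for the deployment the test creates.

kubectl create deployment gc-demo --image=nginx:1.15-alpine
kubectl scale deployment gc-demo --replicas=2
# Default (non-orphaning) delete: the ReplicaSet and pods are garbage collected
kubectl delete deployment gc-demo
kubectl get rs,pods -l app=gc-demo   # drains to empty, as the test polls for
# For contrast, an orphaning delete leaves the ReplicaSet behind:
#   kubectl delete deployment gc-demo --cascade=orphan   (kubectl >= 1.20)
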
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:19:32.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:19:32.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3122" for this suite.
Jan 23 14:19:54.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:19:55.192: INFO: namespace pods-3122 deletion completed in 22.28163877s

• [SLOW TEST:22.431 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
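The QoS class verified above is derived from the pod's resource spec: requests equal to limits for every container yields Guaranteed, requests below limits yields Burstable, and no requests or limits at all yields BestEffort. A sketch under hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: c
    image: nginx:1.15-alpine
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed
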
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:19:55.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:19:55.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6502" for this suite.
Jan 23 14:20:01.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:20:01.735: INFO: namespace kubelet-test-6502 deletion completed in 6.165140643s

• [SLOW TEST:6.542 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
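The [It] body is empty in the log because the fixture pod is created in BeforeEach; the property being checked is simply that a crash-looping pod can still be deleted. A hypothetical-name sketch:

kubectl run bad-pod --image=busybox:1.29 -- /bin/false
kubectl get pod bad-pod     # settles into CrashLoopBackOff
kubectl delete pod bad-pod  # deletion must still succeed
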
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:20:01.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 23 14:20:01.991: INFO: Number of nodes with available pods: 0
Jan 23 14:20:01.991: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:03.013: INFO: Number of nodes with available pods: 0
Jan 23 14:20:03.013: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:04.156: INFO: Number of nodes with available pods: 0
Jan 23 14:20:04.156: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:05.007: INFO: Number of nodes with available pods: 0
Jan 23 14:20:05.007: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:06.010: INFO: Number of nodes with available pods: 0
Jan 23 14:20:06.010: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:07.010: INFO: Number of nodes with available pods: 0
Jan 23 14:20:07.010: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:09.046: INFO: Number of nodes with available pods: 0
Jan 23 14:20:09.046: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:10.179: INFO: Number of nodes with available pods: 0
Jan 23 14:20:10.179: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:11.025: INFO: Number of nodes with available pods: 0
Jan 23 14:20:11.026: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:12.011: INFO: Number of nodes with available pods: 2
Jan 23 14:20:12.011: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 23 14:20:12.151: INFO: Number of nodes with available pods: 1
Jan 23 14:20:12.151: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:13.173: INFO: Number of nodes with available pods: 1
Jan 23 14:20:13.173: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:14.163: INFO: Number of nodes with available pods: 1
Jan 23 14:20:14.163: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:15.171: INFO: Number of nodes with available pods: 1
Jan 23 14:20:15.172: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:16.167: INFO: Number of nodes with available pods: 1
Jan 23 14:20:16.167: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:17.182: INFO: Number of nodes with available pods: 1
Jan 23 14:20:17.183: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:18.166: INFO: Number of nodes with available pods: 1
Jan 23 14:20:18.166: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:19.166: INFO: Number of nodes with available pods: 1
Jan 23 14:20:19.166: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:20.196: INFO: Number of nodes with available pods: 1
Jan 23 14:20:20.197: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:20:21.171: INFO: Number of nodes with available pods: 2
Jan 23 14:20:21.171: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2150, will wait for the garbage collector to delete the pods
Jan 23 14:20:21.251: INFO: Deleting DaemonSet.extensions daemon-set took: 15.283508ms
Jan 23 14:20:21.552: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.36643ms
Jan 23 14:20:37.963: INFO: Number of nodes with available pods: 0
Jan 23 14:20:37.963: INFO: Number of running nodes: 0, number of available pods: 0
Jan 23 14:20:37.968: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2150/daemonsets","resourceVersion":"21568538"},"items":null}

Jan 23 14:20:37.972: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2150/pods","resourceVersion":"21568538"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:20:37.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2150" for this suite.
Jan 23 14:20:44.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:20:44.120: INFO: namespace daemonsets-2150 deletion completed in 6.124942833s

• [SLOW TEST:42.385 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
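A minimal DaemonSet comparable to the "daemon-set" created above; deleting (or failing) one of its pods makes the controller revive it, which is the retry behavior the test forces by setting a pod's phase to Failed. The manifest names are hypothetical; the nginx image version matches the one used elsewhere in this run.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.15-alpine
EOF
kubectl get pods -l app=daemon-set -o wide   # one pod per schedulable node
kubectl delete pod -l app=daemon-set         # the controller recreates them
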
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:20:44.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 23 14:20:44.234: INFO: Waiting up to 5m0s for pod "pod-ff49b1fe-226d-4726-81e9-b99b028adf4b" in namespace "emptydir-6266" to be "success or failure"
Jan 23 14:20:44.254: INFO: Pod "pod-ff49b1fe-226d-4726-81e9-b99b028adf4b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.900011ms
Jan 23 14:20:46.261: INFO: Pod "pod-ff49b1fe-226d-4726-81e9-b99b028adf4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026937076s
Jan 23 14:20:48.274: INFO: Pod "pod-ff49b1fe-226d-4726-81e9-b99b028adf4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039760871s
Jan 23 14:20:50.306: INFO: Pod "pod-ff49b1fe-226d-4726-81e9-b99b028adf4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072309499s
Jan 23 14:20:52.324: INFO: Pod "pod-ff49b1fe-226d-4726-81e9-b99b028adf4b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089688414s
Jan 23 14:20:54.332: INFO: Pod "pod-ff49b1fe-226d-4726-81e9-b99b028adf4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098497754s
STEP: Saw pod success
Jan 23 14:20:54.333: INFO: Pod "pod-ff49b1fe-226d-4726-81e9-b99b028adf4b" satisfied condition "success or failure"
Jan 23 14:20:54.339: INFO: Trying to get logs from node iruya-node pod pod-ff49b1fe-226d-4726-81e9-b99b028adf4b container test-container: 
STEP: delete the pod
Jan 23 14:20:54.411: INFO: Waiting for pod pod-ff49b1fe-226d-4726-81e9-b99b028adf4b to disappear
Jan 23 14:20:54.441: INFO: Pod pod-ff49b1fe-226d-4726-81e9-b99b028adf4b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:20:54.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6266" for this suite.
Jan 23 14:21:00.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:21:00.629: INFO: namespace emptydir-6266 deletion completed in 6.180153096s

• [SLOW TEST:16.508 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
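"(root,0644,tmpfs)" in the test name encodes who writes the file, its mode, and the emptyDir medium. A sketch under hypothetical names; medium: Memory selects tmpfs, and omitting it gives the node-default medium exercised by the later (root,0644,default) test in this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo hi > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test && mount | grep /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory      # tmpfs
EOF
kubectl logs emptydir-demo
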
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:21:00.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-9837/secret-test-21ad4b00-23d7-4ce0-abe5-8e17da83ad91
STEP: Creating a pod to test consume secrets
Jan 23 14:21:01.004: INFO: Waiting up to 5m0s for pod "pod-configmaps-ce80fc93-e816-4029-96ee-86aa1656c088" in namespace "secrets-9837" to be "success or failure"
Jan 23 14:21:01.010: INFO: Pod "pod-configmaps-ce80fc93-e816-4029-96ee-86aa1656c088": Phase="Pending", Reason="", readiness=false. Elapsed: 5.653759ms
Jan 23 14:21:03.018: INFO: Pod "pod-configmaps-ce80fc93-e816-4029-96ee-86aa1656c088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013624007s
Jan 23 14:21:05.026: INFO: Pod "pod-configmaps-ce80fc93-e816-4029-96ee-86aa1656c088": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022202167s
Jan 23 14:21:07.037: INFO: Pod "pod-configmaps-ce80fc93-e816-4029-96ee-86aa1656c088": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033151265s
Jan 23 14:21:09.046: INFO: Pod "pod-configmaps-ce80fc93-e816-4029-96ee-86aa1656c088": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041652345s
STEP: Saw pod success
Jan 23 14:21:09.046: INFO: Pod "pod-configmaps-ce80fc93-e816-4029-96ee-86aa1656c088" satisfied condition "success or failure"
Jan 23 14:21:09.051: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ce80fc93-e816-4029-96ee-86aa1656c088 container env-test: 
STEP: delete the pod
Jan 23 14:21:09.097: INFO: Waiting for pod pod-configmaps-ce80fc93-e816-4029-96ee-86aa1656c088 to disappear
Jan 23 14:21:09.115: INFO: Pod pod-configmaps-ce80fc93-e816-4029-96ee-86aa1656c088 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:21:09.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9837" for this suite.
Jan 23 14:21:15.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:21:15.365: INFO: namespace secrets-9837 deletion completed in 6.243495726s

• [SLOW TEST:14.736 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
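"Consumable via the environment" means a secretKeyRef-backed environment variable, as opposed to the secret-volume variant tested elsewhere in this run. Hypothetical names throughout.

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: data-1
EOF
kubectl logs secret-env-demo
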
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:21:15.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-66464568-0113-4af0-ad0d-ec515ed61aa7
STEP: Creating a pod to test consume configMaps
Jan 23 14:21:15.442: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db971f4e-5abe-4897-80ea-246dd8e4a25b" in namespace "projected-7318" to be "success or failure"
Jan 23 14:21:15.454: INFO: Pod "pod-projected-configmaps-db971f4e-5abe-4897-80ea-246dd8e4a25b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.859433ms
Jan 23 14:21:17.463: INFO: Pod "pod-projected-configmaps-db971f4e-5abe-4897-80ea-246dd8e4a25b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021167369s
Jan 23 14:21:19.467: INFO: Pod "pod-projected-configmaps-db971f4e-5abe-4897-80ea-246dd8e4a25b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025774342s
Jan 23 14:21:21.478: INFO: Pod "pod-projected-configmaps-db971f4e-5abe-4897-80ea-246dd8e4a25b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036544017s
Jan 23 14:21:23.490: INFO: Pod "pod-projected-configmaps-db971f4e-5abe-4897-80ea-246dd8e4a25b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048578548s
STEP: Saw pod success
Jan 23 14:21:23.490: INFO: Pod "pod-projected-configmaps-db971f4e-5abe-4897-80ea-246dd8e4a25b" satisfied condition "success or failure"
Jan 23 14:21:23.497: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-db971f4e-5abe-4897-80ea-246dd8e4a25b container projected-configmap-volume-test: 
STEP: delete the pod
Jan 23 14:21:23.547: INFO: Waiting for pod pod-projected-configmaps-db971f4e-5abe-4897-80ea-246dd8e4a25b to disappear
Jan 23 14:21:23.552: INFO: Pod pod-projected-configmaps-db971f4e-5abe-4897-80ea-246dd8e4a25b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:21:23.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7318" for this suite.
Jan 23 14:21:29.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:21:29.719: INFO: namespace projected-7318 deletion completed in 6.160364181s

• [SLOW TEST:14.354 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
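The non-root variant differs from the earlier projected-ConfigMap sketch only in the pod security context. A hypothetical sketch of that delta; fsGroup keeps the projected files readable by the non-root user.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000     # non-root
    fsGroup: 1000
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "id && cat /etc/projected/key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-demo       # reuses the ConfigMap from the earlier sketch
          items:
          - key: data-2
            path: key
EOF
kubectl logs projected-nonroot-demo
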
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:21:29.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2659
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 23 14:21:29.896: INFO: Found 0 stateful pods, waiting for 3
Jan 23 14:21:39.903: INFO: Found 2 stateful pods, waiting for 3
Jan 23 14:21:49.905: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 14:21:49.905: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 14:21:49.905: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 23 14:21:59.905: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 14:21:59.905: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 14:21:59.905: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 23 14:21:59.943: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 23 14:22:10.013: INFO: Updating stateful set ss2
Jan 23 14:22:10.056: INFO: Waiting for Pod statefulset-2659/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 14:22:20.069: INFO: Waiting for Pod statefulset-2659/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 23 14:22:30.558: INFO: Found 2 stateful pods, waiting for 3
Jan 23 14:22:40.581: INFO: Found 2 stateful pods, waiting for 3
Jan 23 14:22:50.613: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 14:22:50.613: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 14:22:50.613: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 23 14:22:50.666: INFO: Updating stateful set ss2
Jan 23 14:22:50.811: INFO: Waiting for Pod statefulset-2659/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 14:23:00.845: INFO: Updating stateful set ss2
Jan 23 14:23:00.864: INFO: Waiting for StatefulSet statefulset-2659/ss2 to complete update
Jan 23 14:23:00.864: INFO: Waiting for Pod statefulset-2659/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 23 14:23:10.883: INFO: Waiting for StatefulSet statefulset-2659/ss2 to complete update
Jan 23 14:23:10.883: INFO: Waiting for Pod statefulset-2659/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 23 14:23:20.880: INFO: Deleting all statefulset in ns statefulset-2659
Jan 23 14:23:20.884: INFO: Scaling statefulset ss2 to 0
Jan 23 14:24:00.965: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 14:24:01.003: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:24:01.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2659" for this suite.
Jan 23 14:24:09.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:24:09.179: INFO: namespace statefulset-2659 deletion completed in 8.139914634s

• [SLOW TEST:159.459 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
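The canary and phased behavior above is driven by the StatefulSet RollingUpdate partition: only ordinals >= partition receive the new template, so partition: 2 canaries ss2-2, and lowering the partition phases the rollout across the remaining pods. A sketch using the two nginx images from the log; the headless service and labels are hypothetical.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None        # headless service backing the StatefulSet
  selector:
    app: ss2
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
EOF
# Canary: bump the image; only ss2-2 rolls to the new revision
kubectl set image statefulset/ss2 nginx=nginx:1.15-alpine
# Phase the rollout by lowering the partition
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
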
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:24:09.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 23 14:24:09.287: INFO: Waiting up to 5m0s for pod "pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34" in namespace "emptydir-6778" to be "success or failure"
Jan 23 14:24:09.297: INFO: Pod "pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34": Phase="Pending", Reason="", readiness=false. Elapsed: 9.746167ms
Jan 23 14:24:11.315: INFO: Pod "pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028096324s
Jan 23 14:24:13.329: INFO: Pod "pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041761343s
Jan 23 14:24:15.556: INFO: Pod "pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26874754s
Jan 23 14:24:17.572: INFO: Pod "pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34": Phase="Pending", Reason="", readiness=false. Elapsed: 8.284835292s
Jan 23 14:24:19.585: INFO: Pod "pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.297988861s
STEP: Saw pod success
Jan 23 14:24:19.585: INFO: Pod "pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34" satisfied condition "success or failure"
Jan 23 14:24:19.589: INFO: Trying to get logs from node iruya-node pod pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34 container test-container: 
STEP: delete the pod
Jan 23 14:24:19.695: INFO: Waiting for pod pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34 to disappear
Jan 23 14:24:19.704: INFO: Pod pod-b2cf267d-7704-4ad1-8ffe-eaf57fa29c34 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:24:19.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6778" for this suite.
Jan 23 14:24:25.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:24:25.989: INFO: namespace emptydir-6778 deletion completed in 6.27690691s

• [SLOW TEST:16.810 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:24:25.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-06509cef-30ae-47f3-8e2a-2c3c0087ef1e
STEP: Creating a pod to test consume secrets
Jan 23 14:24:26.231: INFO: Waiting up to 5m0s for pod "pod-secrets-b179b9fa-9c72-45db-9227-35bdcfac200d" in namespace "secrets-8051" to be "success or failure"
Jan 23 14:24:26.235: INFO: Pod "pod-secrets-b179b9fa-9c72-45db-9227-35bdcfac200d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083174ms
Jan 23 14:24:28.246: INFO: Pod "pod-secrets-b179b9fa-9c72-45db-9227-35bdcfac200d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014698552s
Jan 23 14:24:30.255: INFO: Pod "pod-secrets-b179b9fa-9c72-45db-9227-35bdcfac200d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023714143s
Jan 23 14:24:32.279: INFO: Pod "pod-secrets-b179b9fa-9c72-45db-9227-35bdcfac200d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048152871s
Jan 23 14:24:34.287: INFO: Pod "pod-secrets-b179b9fa-9c72-45db-9227-35bdcfac200d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05578796s
STEP: Saw pod success
Jan 23 14:24:34.287: INFO: Pod "pod-secrets-b179b9fa-9c72-45db-9227-35bdcfac200d" satisfied condition "success or failure"
Jan 23 14:24:34.289: INFO: Trying to get logs from node iruya-node pod pod-secrets-b179b9fa-9c72-45db-9227-35bdcfac200d container secret-volume-test: 
STEP: delete the pod
Jan 23 14:24:34.371: INFO: Waiting for pod pod-secrets-b179b9fa-9c72-45db-9227-35bdcfac200d to disappear
Jan 23 14:24:34.379: INFO: Pod pod-secrets-b179b9fa-9c72-45db-9227-35bdcfac200d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:24:34.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8051" for this suite.
Jan 23 14:24:40.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:24:40.620: INFO: namespace secrets-8051 deletion completed in 6.198764916s

• [SLOW TEST:14.631 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:24:40.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-3c2a83b5-7118-4650-bf4d-e400d1aa2549
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-3c2a83b5-7118-4650-bf4d-e400d1aa2549
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:26:08.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6265" for this suite.
Jan 23 14:26:30.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:26:31.091: INFO: namespace configmap-6265 deletion completed in 22.150168645s

• [SLOW TEST:110.470 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
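The long "waiting to observe update in volume" step is dominated by the kubelet's periodic volume sync. A sketch with hypothetical names: update the ConfigMap in place and watch the mounted file change.

kubectl create configmap cm-upd --from-literal=key=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cfg/key; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: cm-upd
EOF
kubectl patch configmap cm-upd -p '{"data":{"key":"value-2"}}'
kubectl logs -f cm-watch-demo   # value-2 appears after the next kubelet sync
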
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:26:31.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 14:26:31.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e" in namespace "downward-api-819" to be "success or failure"
Jan 23 14:26:31.223: INFO: Pod "downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.511393ms
Jan 23 14:26:33.235: INFO: Pod "downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01766083s
Jan 23 14:26:35.250: INFO: Pod "downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033061567s
Jan 23 14:26:37.257: INFO: Pod "downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039692227s
Jan 23 14:26:39.266: INFO: Pod "downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e": Phase="Running", Reason="", readiness=true. Elapsed: 8.048555213s
Jan 23 14:26:41.278: INFO: Pod "downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060684251s
STEP: Saw pod success
Jan 23 14:26:41.278: INFO: Pod "downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e" satisfied condition "success or failure"
Jan 23 14:26:41.283: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e container client-container: 
STEP: delete the pod
Jan 23 14:26:41.333: INFO: Waiting for pod downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e to disappear
Jan 23 14:26:41.339: INFO: Pod downwardapi-volume-e167a466-d73b-459d-a822-05a5bcd6483e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:26:41.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-819" for this suite.
Jan 23 14:26:47.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:26:47.582: INFO: namespace downward-api-819 deletion completed in 6.236707773s

• [SLOW TEST:16.491 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
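
What this spec asserts: a downwardAPI volume can expose the container's own memory request as a file. A minimal sketch of that idea (names are illustrative; the e2e test uses its own image and paths):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container   # required when used inside a volume
          resource: requests.memory

With the default divisor of "1" the file holds the request in bytes (33554432 for 32Mi); the test reads the value back from container logs, which is the "Trying to get logs ... container client-container" step above.
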
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:26:47.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 14:27:15.702: INFO: Container started at 2020-01-23 14:26:54 +0000 UTC, pod became ready at 2020-01-23 14:27:15 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:27:15.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-936" for this suite.
Jan 23 14:27:37.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:27:37.906: INFO: namespace container-probe-936 deletion completed in 22.195105358s

• [SLOW TEST:50.324 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
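
The probe case above checks two things: the pod must not report Ready before initialDelaySeconds has elapsed (container started 14:26:54, became ready 14:27:15), and because only a readiness probe is configured, the container is never restarted. A sketch of that shape using an httpGet probe (the actual test uses its own probe command; values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo            # hypothetical name
spec:
  containers:
  - name: web
    image: docker.io/library/nginx:1.14-alpine
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20     # pod may not be Ready before this elapses
      periodSeconds: 5

A failing readiness probe only removes the pod from Service endpoints; restarts are the job of liveness probes, which is why restartCount stays 0 here.
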
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:27:37.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 14:27:38.373: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 23 14:27:38.395: INFO: Number of nodes with available pods: 0
Jan 23 14:27:38.395: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 23 14:27:38.566: INFO: Number of nodes with available pods: 0
Jan 23 14:27:38.566: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:39.579: INFO: Number of nodes with available pods: 0
Jan 23 14:27:39.579: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:40.581: INFO: Number of nodes with available pods: 0
Jan 23 14:27:40.581: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:41.575: INFO: Number of nodes with available pods: 0
Jan 23 14:27:41.575: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:42.586: INFO: Number of nodes with available pods: 0
Jan 23 14:27:42.586: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:43.576: INFO: Number of nodes with available pods: 0
Jan 23 14:27:43.576: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:44.581: INFO: Number of nodes with available pods: 0
Jan 23 14:27:44.581: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:45.584: INFO: Number of nodes with available pods: 1
Jan 23 14:27:45.584: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 23 14:27:45.626: INFO: Number of nodes with available pods: 1
Jan 23 14:27:45.626: INFO: Number of running nodes: 0, number of available pods: 1
Jan 23 14:27:46.633: INFO: Number of nodes with available pods: 0
Jan 23 14:27:46.634: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 23 14:27:46.668: INFO: Number of nodes with available pods: 0
Jan 23 14:27:46.668: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:47.678: INFO: Number of nodes with available pods: 0
Jan 23 14:27:47.678: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:49.040: INFO: Number of nodes with available pods: 0
Jan 23 14:27:49.040: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:49.681: INFO: Number of nodes with available pods: 0
Jan 23 14:27:49.682: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:50.675: INFO: Number of nodes with available pods: 0
Jan 23 14:27:50.676: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:51.675: INFO: Number of nodes with available pods: 0
Jan 23 14:27:51.675: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:52.673: INFO: Number of nodes with available pods: 0
Jan 23 14:27:52.673: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:53.675: INFO: Number of nodes with available pods: 0
Jan 23 14:27:53.675: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:54.677: INFO: Number of nodes with available pods: 0
Jan 23 14:27:54.677: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:55.675: INFO: Number of nodes with available pods: 0
Jan 23 14:27:55.676: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:56.678: INFO: Number of nodes with available pods: 0
Jan 23 14:27:56.678: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:57.682: INFO: Number of nodes with available pods: 0
Jan 23 14:27:57.683: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:58.685: INFO: Number of nodes with available pods: 0
Jan 23 14:27:58.685: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:27:59.675: INFO: Number of nodes with available pods: 0
Jan 23 14:27:59.675: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:28:00.693: INFO: Number of nodes with available pods: 0
Jan 23 14:28:00.693: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:28:01.681: INFO: Number of nodes with available pods: 0
Jan 23 14:28:01.681: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:28:02.675: INFO: Number of nodes with available pods: 0
Jan 23 14:28:02.675: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:28:03.715: INFO: Number of nodes with available pods: 1
Jan 23 14:28:03.716: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1819, will wait for the garbage collector to delete the pods
Jan 23 14:28:03.813: INFO: Deleting DaemonSet.extensions daemon-set took: 25.23528ms
Jan 23 14:28:04.114: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.78572ms
Jan 23 14:28:16.633: INFO: Number of nodes with available pods: 0
Jan 23 14:28:16.634: INFO: Number of running nodes: 0, number of available pods: 0
Jan 23 14:28:16.641: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1819/daemonsets","resourceVersion":"21569677"},"items":null}

Jan 23 14:28:16.644: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1819/pods","resourceVersion":"21569677"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:28:16.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1819" for this suite.
Jan 23 14:28:22.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:28:22.786: INFO: namespace daemonsets-1819 deletion completed in 6.093843659s

• [SLOW TEST:44.880 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
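
The DaemonSet above is created with a node selector, so initially no node matches and no pods run; labeling a node launches a pod there, and relabeling it unschedules the pod again, exactly the sequence logged. A minimal sketch (label key/value and image are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue               # pods run only on nodes carrying this label
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine

`kubectl label node iruya-node color=blue` would trigger the launch; switching the label and selector to green drives the unschedule/reschedule cycle seen above. Note that the recurring "Node iruya-node is running more than one daemon pod" lines come from the test's poll helper, logged while a node does not yet run exactly one available daemon pod, so they appear even when the count is zero.
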
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:28:22.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 23 14:28:22.845: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:28:46.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6345" for this suite.
Jan 23 14:28:52.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:28:52.880: INFO: namespace pods-6345 deletion completed in 6.191773418s

• [SLOW TEST:30.093 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
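
This case drives the pod lifecycle through the watch API: create a pod, observe the creation event, delete it gracefully, and confirm the kubelet acknowledged the termination notice before the deletion event arrives. The pod itself can be trivial (illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-demo    # hypothetical name
  labels:
    test: submit-remove           # hypothetical label to select on
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine

Running `kubectl get pods -l test=submit-remove --watch` in a second terminal shows the same event stream the test consumes programmatically; `kubectl delete pod pod-submit-remove-demo` uses the default 30 s grace period, which accounts for most of the time between creation (14:28:22) and the AfterEach at 14:28:46.
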
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:28:52.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-047e78a8-a7cb-48e8-a85f-bb21650eeb70 in namespace container-probe-2081
Jan 23 14:29:00.987: INFO: Started pod test-webserver-047e78a8-a7cb-48e8-a85f-bb21650eeb70 in namespace container-probe-2081
STEP: checking the pod's current state and verifying that restartCount is present
Jan 23 14:29:00.994: INFO: Initial restart count of pod test-webserver-047e78a8-a7cb-48e8-a85f-bb21650eeb70 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:33:01.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2081" for this suite.
Jan 23 14:33:07.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:33:07.319: INFO: namespace container-probe-2081 deletion completed in 6.223906635s

• [SLOW TEST:254.440 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
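
Here the assertion is negative: a pod with an HTTP liveness probe against /healthz must keep restartCount at 0 for the whole observation window (the test watches from 14:29:00 to 14:33:01 before deleting the pod). The shape of such a spec (the image is a placeholder; any server answering 200 on /healthz works):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo             # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: example.com/test-webserver:1.0   # placeholder image that serves /healthz
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1

If the probe ever failed failureThreshold times in a row (default 3), the kubelet would kill and restart the container, restartCount would tick up, and the test would fail.
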
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:33:07.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 14:33:07.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e0cf6d2-adac-4d99-ab07-2d456fdd59fe" in namespace "projected-8423" to be "success or failure"
Jan 23 14:33:07.459: INFO: Pod "downwardapi-volume-4e0cf6d2-adac-4d99-ab07-2d456fdd59fe": Phase="Pending", Reason="", readiness=false. Elapsed: 21.976687ms
Jan 23 14:33:09.469: INFO: Pod "downwardapi-volume-4e0cf6d2-adac-4d99-ab07-2d456fdd59fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032528663s
Jan 23 14:33:11.478: INFO: Pod "downwardapi-volume-4e0cf6d2-adac-4d99-ab07-2d456fdd59fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04144317s
Jan 23 14:33:13.489: INFO: Pod "downwardapi-volume-4e0cf6d2-adac-4d99-ab07-2d456fdd59fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051947265s
Jan 23 14:33:15.498: INFO: Pod "downwardapi-volume-4e0cf6d2-adac-4d99-ab07-2d456fdd59fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061576672s
STEP: Saw pod success
Jan 23 14:33:15.499: INFO: Pod "downwardapi-volume-4e0cf6d2-adac-4d99-ab07-2d456fdd59fe" satisfied condition "success or failure"
Jan 23 14:33:15.503: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4e0cf6d2-adac-4d99-ab07-2d456fdd59fe container client-container: 
STEP: delete the pod
Jan 23 14:33:15.562: INFO: Waiting for pod downwardapi-volume-4e0cf6d2-adac-4d99-ab07-2d456fdd59fe to disappear
Jan 23 14:33:15.576: INFO: Pod downwardapi-volume-4e0cf6d2-adac-4d99-ab07-2d456fdd59fe no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:33:15.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8423" for this suite.
Jan 23 14:33:21.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:33:21.749: INFO: namespace projected-8423 deletion completed in 6.166298675s

• [SLOW TEST:14.429 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
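
Same downward-API idea as earlier, but through a projected volume, which can merge downwardAPI, configMap, secret, and serviceAccountToken sources into one mount. A sketch for the CPU-request case (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m         # emit millicores (250); the default divisor of 1 rounds up to whole cores
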
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:33:21.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 23 14:33:21.915: INFO: Waiting up to 5m0s for pod "pod-050e4d8e-8578-4299-890e-184400ea9cd8" in namespace "emptydir-8824" to be "success or failure"
Jan 23 14:33:21.966: INFO: Pod "pod-050e4d8e-8578-4299-890e-184400ea9cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.942275ms
Jan 23 14:33:23.976: INFO: Pod "pod-050e4d8e-8578-4299-890e-184400ea9cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060464854s
Jan 23 14:33:25.988: INFO: Pod "pod-050e4d8e-8578-4299-890e-184400ea9cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072410866s
Jan 23 14:33:27.998: INFO: Pod "pod-050e4d8e-8578-4299-890e-184400ea9cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082702141s
Jan 23 14:33:30.008: INFO: Pod "pod-050e4d8e-8578-4299-890e-184400ea9cd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093006742s
STEP: Saw pod success
Jan 23 14:33:30.009: INFO: Pod "pod-050e4d8e-8578-4299-890e-184400ea9cd8" satisfied condition "success or failure"
Jan 23 14:33:30.012: INFO: Trying to get logs from node iruya-node pod pod-050e4d8e-8578-4299-890e-184400ea9cd8 container test-container: 
STEP: delete the pod
Jan 23 14:33:30.047: INFO: Waiting for pod pod-050e4d8e-8578-4299-890e-184400ea9cd8 to disappear
Jan 23 14:33:30.052: INFO: Pod pod-050e4d8e-8578-4299-890e-184400ea9cd8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:33:30.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8824" for this suite.
Jan 23 14:33:36.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:33:36.341: INFO: namespace emptydir-8824 deletion completed in 6.284345634s

• [SLOW TEST:14.592 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
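
The (root,0666,default) triple in the test name encodes the variant: run as root, a file created with mode 0666, on the default medium (node disk rather than tmpfs). The volume wiring itself is minimal (illustrative; the e2e test uses its own mounttest image to print ownership, mode, and medium, then asserts on that output):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium; medium: Memory would use tmpfs

The wait-for-Succeeded-then-scrape-logs pattern above is the standard harness for these one-shot volume checks.
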
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:33:36.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan 23 14:33:36.820: INFO: Waiting up to 5m0s for pod "var-expansion-6bf1d934-15a5-4fae-acd1-424cc10da97b" in namespace "var-expansion-4957" to be "success or failure"
Jan 23 14:33:36.849: INFO: Pod "var-expansion-6bf1d934-15a5-4fae-acd1-424cc10da97b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.830358ms
Jan 23 14:33:38.866: INFO: Pod "var-expansion-6bf1d934-15a5-4fae-acd1-424cc10da97b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04635307s
Jan 23 14:33:40.888: INFO: Pod "var-expansion-6bf1d934-15a5-4fae-acd1-424cc10da97b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067923405s
Jan 23 14:33:42.900: INFO: Pod "var-expansion-6bf1d934-15a5-4fae-acd1-424cc10da97b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080181151s
Jan 23 14:33:44.918: INFO: Pod "var-expansion-6bf1d934-15a5-4fae-acd1-424cc10da97b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098273327s
STEP: Saw pod success
Jan 23 14:33:44.919: INFO: Pod "var-expansion-6bf1d934-15a5-4fae-acd1-424cc10da97b" satisfied condition "success or failure"
Jan 23 14:33:44.929: INFO: Trying to get logs from node iruya-node pod var-expansion-6bf1d934-15a5-4fae-acd1-424cc10da97b container dapi-container: 
STEP: delete the pod
Jan 23 14:33:45.022: INFO: Waiting for pod var-expansion-6bf1d934-15a5-4fae-acd1-424cc10da97b to disappear
Jan 23 14:33:45.038: INFO: Pod var-expansion-6bf1d934-15a5-4fae-acd1-424cc10da97b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:33:45.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4957" for this suite.
Jan 23 14:33:51.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:33:51.429: INFO: namespace var-expansion-4957 deletion completed in 6.347429668s

• [SLOW TEST:15.088 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
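
Command expansion: $(VAR) references in a container's command and args are substituted by the kubelet from the container's declared env before the process starts, no shell required. Sketch (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["sh", "-c", "echo $(MESSAGE)"]

A literal $(MESSAGE) can be kept by escaping it as $$(MESSAGE); references that cannot be resolved are left unchanged rather than failing the pod.
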
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:33:51.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ca99afba-f97f-4d03-9f16-3b81fb5b7d0a
STEP: Creating a pod to test consume configMaps
Jan 23 14:33:51.593: INFO: Waiting up to 5m0s for pod "pod-configmaps-c13df73c-7262-4576-be55-a67191c1f408" in namespace "configmap-9539" to be "success or failure"
Jan 23 14:33:51.697: INFO: Pod "pod-configmaps-c13df73c-7262-4576-be55-a67191c1f408": Phase="Pending", Reason="", readiness=false. Elapsed: 103.268279ms
Jan 23 14:33:53.711: INFO: Pod "pod-configmaps-c13df73c-7262-4576-be55-a67191c1f408": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116883963s
Jan 23 14:33:55.722: INFO: Pod "pod-configmaps-c13df73c-7262-4576-be55-a67191c1f408": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1280745s
Jan 23 14:33:57.729: INFO: Pod "pod-configmaps-c13df73c-7262-4576-be55-a67191c1f408": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135249328s
Jan 23 14:33:59.738: INFO: Pod "pod-configmaps-c13df73c-7262-4576-be55-a67191c1f408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.144446316s
STEP: Saw pod success
Jan 23 14:33:59.738: INFO: Pod "pod-configmaps-c13df73c-7262-4576-be55-a67191c1f408" satisfied condition "success or failure"
Jan 23 14:33:59.743: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c13df73c-7262-4576-be55-a67191c1f408 container configmap-volume-test: 
STEP: delete the pod
Jan 23 14:34:00.032: INFO: Waiting for pod pod-configmaps-c13df73c-7262-4576-be55-a67191c1f408 to disappear
Jan 23 14:34:00.037: INFO: Pod pod-configmaps-c13df73c-7262-4576-be55-a67191c1f408 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:34:00.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9539" for this suite.
Jan 23 14:34:06.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:34:06.198: INFO: namespace configmap-9539 deletion completed in 6.156040249s

• [SLOW TEST:14.768 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
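
The "with mappings" variant uses items to remap a ConfigMap key onto a chosen file path inside the mount, instead of exposing every key under its own name. Sketch (names illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-map               # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/renamed"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: example-map
      items:
      - key: data-1
        path: path/to/renamed     # only mapped keys appear, at the mapped path
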
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:34:06.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 23 14:34:15.501: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:34:15.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3818" for this suite.
Jan 23 14:34:21.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:34:21.710: INFO: namespace container-runtime-3818 deletion completed in 6.175296932s

• [SLOW TEST:15.512 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
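
terminationMessagePolicy: FallbackToLogsOnError only copies the tail of the container log into the termination message when the container exits with an error and wrote nothing to terminationMessagePath. Here the container succeeds, so the message must stay empty, which is the `Expected: &{} to match Container's Termination Message:  --` assertion above. Sketch (illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError

Exit non-zero instead and the log tail would surface in status.containerStatuses[0].state.terminated.message.
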
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:34:21.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-db1fc2c1-cbee-47d8-bd05-33488627c933
STEP: Creating a pod to test consume secrets
Jan 23 14:34:21.893: INFO: Waiting up to 5m0s for pod "pod-secrets-22acc166-ba77-4b78-b6a8-f997d8b28ffd" in namespace "secrets-4872" to be "success or failure"
Jan 23 14:34:21.969: INFO: Pod "pod-secrets-22acc166-ba77-4b78-b6a8-f997d8b28ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 76.245371ms
Jan 23 14:34:23.977: INFO: Pod "pod-secrets-22acc166-ba77-4b78-b6a8-f997d8b28ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084330185s
Jan 23 14:34:26.004: INFO: Pod "pod-secrets-22acc166-ba77-4b78-b6a8-f997d8b28ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11101606s
Jan 23 14:34:28.014: INFO: Pod "pod-secrets-22acc166-ba77-4b78-b6a8-f997d8b28ffd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120908742s
Jan 23 14:34:30.030: INFO: Pod "pod-secrets-22acc166-ba77-4b78-b6a8-f997d8b28ffd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.137311175s
STEP: Saw pod success
Jan 23 14:34:30.030: INFO: Pod "pod-secrets-22acc166-ba77-4b78-b6a8-f997d8b28ffd" satisfied condition "success or failure"
Jan 23 14:34:30.035: INFO: Trying to get logs from node iruya-node pod pod-secrets-22acc166-ba77-4b78-b6a8-f997d8b28ffd container secret-volume-test: 
STEP: delete the pod
Jan 23 14:34:30.133: INFO: Waiting for pod pod-secrets-22acc166-ba77-4b78-b6a8-f997d8b28ffd to disappear
Jan 23 14:34:30.140: INFO: Pod pod-secrets-22acc166-ba77-4b78-b6a8-f997d8b28ffd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:34:30.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4872" for this suite.
Jan 23 14:34:36.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:34:36.312: INFO: namespace secrets-4872 deletion completed in 6.16420834s

• [SLOW TEST:14.602 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
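
Three knobs interact in this spec: runAsUser makes the container non-root, fsGroup sets the group ownership the kubelet applies to the volume files, and defaultMode sets their permissions, so the non-root user can still read the secret. Sketch (names and IDs illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: example-secret            # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo          # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # non-root
    fsGroup: 1001                 # volume files group-owned by this GID
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
      defaultMode: 0440           # group-readable; YAML 1.1 parses this as octal
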
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:34:36.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 14:34:36.479: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 23 14:34:36.511: INFO: Number of nodes with available pods: 0
Jan 23 14:34:36.511: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:34:37.527: INFO: Number of nodes with available pods: 0
Jan 23 14:34:37.527: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:34:39.429: INFO: Number of nodes with available pods: 0
Jan 23 14:34:39.429: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:34:39.527: INFO: Number of nodes with available pods: 0
Jan 23 14:34:39.527: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:34:41.001: INFO: Number of nodes with available pods: 0
Jan 23 14:34:41.001: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:34:41.525: INFO: Number of nodes with available pods: 0
Jan 23 14:34:41.526: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:34:42.543: INFO: Number of nodes with available pods: 0
Jan 23 14:34:42.543: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:34:44.682: INFO: Number of nodes with available pods: 0
Jan 23 14:34:44.682: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:34:45.552: INFO: Number of nodes with available pods: 1
Jan 23 14:34:45.552: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:34:46.531: INFO: Number of nodes with available pods: 1
Jan 23 14:34:46.531: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:34:47.537: INFO: Number of nodes with available pods: 1
Jan 23 14:34:47.537: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 23 14:34:48.536: INFO: Number of nodes with available pods: 2
Jan 23 14:34:48.536: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 23 14:34:48.695: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:48.696: INFO: Wrong image for pod: daemon-set-59wj8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:49.741: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:49.741: INFO: Wrong image for pod: daemon-set-59wj8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:50.738: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:50.738: INFO: Wrong image for pod: daemon-set-59wj8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:51.729: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:51.729: INFO: Wrong image for pod: daemon-set-59wj8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:52.728: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:52.729: INFO: Wrong image for pod: daemon-set-59wj8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:53.731: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:53.731: INFO: Wrong image for pod: daemon-set-59wj8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:53.731: INFO: Pod daemon-set-59wj8 is not available
Jan 23 14:34:54.754: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:54.754: INFO: Pod daemon-set-kklxh is not available
Jan 23 14:34:55.789: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:55.789: INFO: Pod daemon-set-kklxh is not available
Jan 23 14:34:56.728: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:56.728: INFO: Pod daemon-set-kklxh is not available
Jan 23 14:34:57.727: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:57.727: INFO: Pod daemon-set-kklxh is not available
Jan 23 14:34:58.754: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:58.755: INFO: Pod daemon-set-kklxh is not available
Jan 23 14:34:59.729: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:34:59.729: INFO: Pod daemon-set-kklxh is not available
Jan 23 14:35:00.726: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:35:00.726: INFO: Pod daemon-set-kklxh is not available
Jan 23 14:35:01.731: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:35:02.731: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:35:03.736: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:35:04.728: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:35:05.725: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:35:06.727: INFO: Wrong image for pod: daemon-set-2569w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 23 14:35:06.727: INFO: Pod daemon-set-2569w is not available
Jan 23 14:35:07.728: INFO: Pod daemon-set-5k6kk is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 23 14:35:07.744: INFO: Number of nodes with available pods: 1
Jan 23 14:35:07.744: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:35:08.756: INFO: Number of nodes with available pods: 1
Jan 23 14:35:08.756: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:35:09.757: INFO: Number of nodes with available pods: 1
Jan 23 14:35:09.757: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:35:10.782: INFO: Number of nodes with available pods: 1
Jan 23 14:35:10.782: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:35:11.774: INFO: Number of nodes with available pods: 1
Jan 23 14:35:11.774: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:35:12.783: INFO: Number of nodes with available pods: 1
Jan 23 14:35:12.783: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:35:13.765: INFO: Number of nodes with available pods: 1
Jan 23 14:35:13.766: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:35:14.772: INFO: Number of nodes with available pods: 1
Jan 23 14:35:14.772: INFO: Node iruya-node is running more than one daemon pod
Jan 23 14:35:15.763: INFO: Number of nodes with available pods: 2
Jan 23 14:35:15.763: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3200, will wait for the garbage collector to delete the pods
Jan 23 14:35:15.885: INFO: Deleting DaemonSet.extensions daemon-set took: 15.102927ms
Jan 23 14:35:16.186: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.537704ms
Jan 23 14:35:23.091: INFO: Number of nodes with available pods: 0
Jan 23 14:35:23.091: INFO: Number of running nodes: 0, number of available pods: 0
Jan 23 14:35:23.095: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3200/daemonsets","resourceVersion":"21570534"},"items":null}

Jan 23 14:35:23.098: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3200/pods","resourceVersion":"21570534"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:35:23.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3200" for this suite.
Jan 23 14:35:29.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:35:29.293: INFO: namespace daemonsets-3200 deletion completed in 6.181976985s

• [SLOW TEST:52.981 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
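
With updateStrategy RollingUpdate, patching the pod template (here the image flipped from docker.io/library/nginx:1.14-alpine to gcr.io/kubernetes-e2e-test-images/redis:1.0) makes the controller replace daemon pods node by node, which is the "Wrong image for pod" / "is not available" churn above until both nodes converge. Sketch of such a DaemonSet (selector and labels illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1           # at most one node's pod down at a time
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine

`kubectl set image ds/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0` would trigger the same rollout by hand.
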
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:35:29.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 23 14:35:29.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-2539'
Jan 23 14:35:31.298: INFO: stderr: ""
Jan 23 14:35:31.298: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 23 14:35:41.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-2539 -o json'
Jan 23 14:35:41.453: INFO: stderr: ""
Jan 23 14:35:41.453: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-23T14:35:31Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-2539\",\n        \"resourceVersion\": \"21570597\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2539/pods/e2e-test-nginx-pod\",\n        \"uid\": \"d42fa504-8aad-4a14-9b4c-d6534e5c3656\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-c498m\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-c498m\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-c498m\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-23T14:35:31Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-23T14:35:38Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-23T14:35:38Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-23T14:35:31Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://b995278dd50af981861aae05be0c9b6668f3706a979f33a847ebd02bd140fb19\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-23T14:35:37Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-23T14:35:31Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 23 14:35:41.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2539'
Jan 23 14:35:41.918: INFO: stderr: ""
Jan 23 14:35:41.918: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan 23 14:35:41.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2539'
Jan 23 14:35:49.601: INFO: stderr: ""
Jan 23 14:35:49.602: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:35:49.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2539" for this suite.
Jan 23 14:35:55.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:35:55.792: INFO: namespace kubectl-2539 deletion completed in 6.179026334s

• [SLOW TEST:26.498 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
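
The replace flow above is: run a pod, dump it with `kubectl get pod -o json`, swap the image in the JSON, and pipe the result to `kubectl replace -f -`. On a live Pod only a few spec fields are mutable, container image among them, which is why the replace succeeds without recreating the pod. Trimmed to the fields that matter here, the replacement object looks like this (illustrative reduction of the JSON above):

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-2539
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # the one field changed from nginx:1.14-alpine

In practice `kubectl replace` wants a complete object, which is why the test pipes the unmodified get output back in with only the image edited rather than hand-writing a manifest.
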
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:35:55.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 23 14:35:55.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2877'
Jan 23 14:35:56.076: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 23 14:35:56.076: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 23 14:35:56.095: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 23 14:35:56.106: INFO: scanned /root for discovery docs: 
Jan 23 14:35:56.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2877'
Jan 23 14:36:18.301: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 23 14:36:18.301: INFO: stdout: "Created e2e-test-nginx-rc-d042210c7508368404c63115014a465b\nScaling up e2e-test-nginx-rc-d042210c7508368404c63115014a465b from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d042210c7508368404c63115014a465b up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d042210c7508368404c63115014a465b to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 23 14:36:18.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:36:18.496: INFO: stderr: ""
Jan 23 14:36:18.496: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:36:23.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:36:23.705: INFO: stderr: ""
Jan 23 14:36:23.705: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:36:28.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:36:28.860: INFO: stderr: ""
Jan 23 14:36:28.860: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:36:33.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:36:34.087: INFO: stderr: ""
Jan 23 14:36:34.087: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:36:39.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:36:39.246: INFO: stderr: ""
Jan 23 14:36:39.246: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:36:44.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:36:44.363: INFO: stderr: ""
Jan 23 14:36:44.363: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:36:49.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:36:49.542: INFO: stderr: ""
Jan 23 14:36:49.543: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:36:54.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:36:54.664: INFO: stderr: ""
Jan 23 14:36:54.664: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:36:59.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:36:59.865: INFO: stderr: ""
Jan 23 14:36:59.865: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:37:04.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:37:05.029: INFO: stderr: ""
Jan 23 14:37:05.029: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:37:10.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:37:10.187: INFO: stderr: ""
Jan 23 14:37:10.187: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:37:15.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:37:15.414: INFO: stderr: ""
Jan 23 14:37:15.414: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:37:20.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:37:20.673: INFO: stderr: ""
Jan 23 14:37:20.674: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:37:25.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:37:25.838: INFO: stderr: ""
Jan 23 14:37:25.838: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:37:30.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:37:30.946: INFO: stderr: ""
Jan 23 14:37:30.947: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:37:35.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:37:36.121: INFO: stderr: ""
Jan 23 14:37:36.121: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf e2e-test-nginx-rc-mmh2w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 23 14:37:41.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:37:41.276: INFO: stderr: ""
Jan 23 14:37:41.276: INFO: stdout: "e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf "
Jan 23 14:37:41.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2877'
Jan 23 14:37:41.368: INFO: stderr: ""
Jan 23 14:37:41.368: INFO: stdout: "true"
Jan 23 14:37:41.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2877'
Jan 23 14:37:41.519: INFO: stderr: ""
Jan 23 14:37:41.519: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 23 14:37:41.519: INFO: e2e-test-nginx-rc-d042210c7508368404c63115014a465b-nfppf is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan 23 14:37:41.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2877'
Jan 23 14:37:41.667: INFO: stderr: ""
Jan 23 14:37:41.667: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:37:41.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2877" for this suite.
Jan 23 14:37:47.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:37:47.834: INFO: namespace kubectl-2877 deletion completed in 6.162694339s

• [SLOW TEST:112.040 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
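
For reference, the flow above can be reproduced by hand. A minimal sketch, assuming the same kubeconfig and namespace as the log (the legacy commands were removed from later kubectl releases; the modern equivalent acts on a Deployment):

    # legacy path, exactly as the test drives it
    kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --generator=run/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2877
    kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2877
    # modern equivalent: rolling updates via a Deployment
    kubectl create deployment nginx --image=docker.io/library/nginx:1.14-alpine
    kubectl set image deployment/nginx nginx=docker.io/library/nginx:1.14-alpine
    kubectl rollout status deployment/nginx
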
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:37:47.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:38:47.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1031" for this suite.
Jan 23 14:39:10.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:39:10.132: INFO: namespace container-probe-1031 deletion completed in 22.159784845s

• [SLOW TEST:82.298 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
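
The behavior under test: a failing readiness probe leaves the pod Running but never Ready, and readiness failures do not restart the container (only liveness probes trigger restarts). A minimal manifest sketch, assuming a pullable busybox image; the pod name probe-demo is illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: probe-demo
    spec:
      containers:
      - name: probe-demo
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "sleep 600"]
        readinessProbe:
          exec:
            command: ["/bin/false"]   # always fails, so the pod never turns Ready
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    kubectl get pod probe-demo        # READY stays 0/1, RESTARTS stays 0
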
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:39:10.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 23 14:39:10.242: INFO: Waiting up to 5m0s for pod "pod-8397d263-950b-4344-88f9-482622758993" in namespace "emptydir-3641" to be "success or failure"
Jan 23 14:39:10.247: INFO: Pod "pod-8397d263-950b-4344-88f9-482622758993": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132295ms
Jan 23 14:39:12.257: INFO: Pod "pod-8397d263-950b-4344-88f9-482622758993": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014457911s
Jan 23 14:39:14.275: INFO: Pod "pod-8397d263-950b-4344-88f9-482622758993": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032500985s
Jan 23 14:39:16.284: INFO: Pod "pod-8397d263-950b-4344-88f9-482622758993": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041164775s
Jan 23 14:39:18.295: INFO: Pod "pod-8397d263-950b-4344-88f9-482622758993": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052776314s
Jan 23 14:39:20.314: INFO: Pod "pod-8397d263-950b-4344-88f9-482622758993": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071287696s
STEP: Saw pod success
Jan 23 14:39:20.314: INFO: Pod "pod-8397d263-950b-4344-88f9-482622758993" satisfied condition "success or failure"
Jan 23 14:39:20.320: INFO: Trying to get logs from node iruya-node pod pod-8397d263-950b-4344-88f9-482622758993 container test-container: 
STEP: delete the pod
Jan 23 14:39:20.461: INFO: Waiting for pod pod-8397d263-950b-4344-88f9-482622758993 to disappear
Jan 23 14:39:20.472: INFO: Pod pod-8397d263-950b-4344-88f9-482622758993 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:39:20.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3641" for this suite.
Jan 23 14:39:26.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:39:26.664: INFO: namespace emptydir-3641 deletion completed in 6.178266696s

• [SLOW TEST:16.531 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
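
In the test name, "non-root" is the pod's UID, "0666" is the mode of the file the suite's mounttest container writes inside the volume (not a volume option), and "default" is the emptyDir medium (node disk rather than tmpfs). A hand-rolled sketch under those assumptions, all names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      securityContext:
        runAsUser: 1001              # non-root
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      restartPolicy: Never
      volumes:
      - name: test-volume
        emptyDir: {}                 # default medium; medium: Memory would use tmpfs
    EOF
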
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:39:26.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 23 14:39:26.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4374'
Jan 23 14:39:26.986: INFO: stderr: ""
Jan 23 14:39:26.986: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 23 14:39:26.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4374'
Jan 23 14:39:33.113: INFO: stderr: ""
Jan 23 14:39:33.114: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:39:33.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4374" for this suite.
Jan 23 14:39:39.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:39:39.341: INFO: namespace kubectl-4374 deletion completed in 6.168257895s

• [SLOW TEST:12.677 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
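
With --restart=Never, kubectl run creates a bare Pod instead of a Deployment or Job; the command below is lifted from the log, minus the kubeconfig and namespace flags:

    kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
    kubectl get pod e2e-test-nginx-pod
    kubectl delete pod e2e-test-nginx-pod
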
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:39:39.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 23 14:39:47.951: INFO: Successfully updated pod "pod-update-activedeadlineseconds-cc1a4d09-b153-44dc-95e9-c1c73d2a78e5"
Jan 23 14:39:47.951: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-cc1a4d09-b153-44dc-95e9-c1c73d2a78e5" in namespace "pods-6730" to be "terminated due to deadline exceeded"
Jan 23 14:39:47.961: INFO: Pod "pod-update-activedeadlineseconds-cc1a4d09-b153-44dc-95e9-c1c73d2a78e5": Phase="Running", Reason="", readiness=true. Elapsed: 9.823869ms
Jan 23 14:39:49.975: INFO: Pod "pod-update-activedeadlineseconds-cc1a4d09-b153-44dc-95e9-c1c73d2a78e5": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.023537264s
Jan 23 14:39:49.975: INFO: Pod "pod-update-activedeadlineseconds-cc1a4d09-b153-44dc-95e9-c1c73d2a78e5" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:39:49.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6730" for this suite.
Jan 23 14:39:56.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:39:56.208: INFO: namespace pods-6730 deletion completed in 6.208528107s

• [SLOW TEST:16.867 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
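
activeDeadlineSeconds may be added to a running pod (or shortened, never extended); once the deadline elapses, the kubelet fails the pod with reason DeadlineExceeded, matching the Phase="Failed" transition above. A sketch with kubectl patch, pod name illustrative:

    kubectl patch pod pod-update-demo --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
    sleep 6
    kubectl get pod pod-update-demo -o jsonpath='{.status.phase}/{.status.reason}'
    # expected: Failed/DeadlineExceeded
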
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:39:56.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan 23 14:39:56.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1552'
Jan 23 14:39:56.867: INFO: stderr: ""
Jan 23 14:39:56.868: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan 23 14:39:57.880: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:39:57.880: INFO: Found 0 / 1
Jan 23 14:39:58.878: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:39:58.878: INFO: Found 0 / 1
Jan 23 14:39:59.888: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:39:59.888: INFO: Found 0 / 1
Jan 23 14:40:00.883: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:40:00.883: INFO: Found 0 / 1
Jan 23 14:40:01.894: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:40:01.894: INFO: Found 0 / 1
Jan 23 14:40:02.880: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:40:02.881: INFO: Found 0 / 1
Jan 23 14:40:03.886: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:40:03.887: INFO: Found 1 / 1
Jan 23 14:40:03.887: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 23 14:40:03.894: INFO: Selector matched 1 pods for map[app:redis]
Jan 23 14:40:03.894: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 23 14:40:03.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dsctg redis-master --namespace=kubectl-1552'
Jan 23 14:40:04.037: INFO: stderr: ""
Jan 23 14:40:04.037: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 23 Jan 14:40:03.093 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jan 14:40:03.093 # Server started, Redis version 3.2.12\n1:M 23 Jan 14:40:03.093 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jan 14:40:03.094 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 23 14:40:04.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dsctg redis-master --namespace=kubectl-1552 --tail=1'
Jan 23 14:40:04.138: INFO: stderr: ""
Jan 23 14:40:04.138: INFO: stdout: "1:M 23 Jan 14:40:03.094 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 23 14:40:04.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dsctg redis-master --namespace=kubectl-1552 --limit-bytes=1'
Jan 23 14:40:04.225: INFO: stderr: ""
Jan 23 14:40:04.225: INFO: stdout: " "
STEP: exposing timestamps
Jan 23 14:40:04.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dsctg redis-master --namespace=kubectl-1552 --tail=1 --timestamps'
Jan 23 14:40:04.379: INFO: stderr: ""
Jan 23 14:40:04.379: INFO: stdout: "2020-01-23T14:40:03.094784664Z 1:M 23 Jan 14:40:03.094 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 23 14:40:06.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dsctg redis-master --namespace=kubectl-1552 --since=1s'
Jan 23 14:40:07.034: INFO: stderr: ""
Jan 23 14:40:07.034: INFO: stdout: ""
Jan 23 14:40:07.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dsctg redis-master --namespace=kubectl-1552 --since=24h'
Jan 23 14:40:07.221: INFO: stderr: ""
Jan 23 14:40:07.221: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 23 Jan 14:40:03.093 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jan 14:40:03.093 # Server started, Redis version 3.2.12\n1:M 23 Jan 14:40:03.093 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jan 14:40:03.094 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan 23 14:40:07.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1552'
Jan 23 14:40:07.323: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 14:40:07.323: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 23 14:40:07.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1552'
Jan 23 14:40:07.440: INFO: stderr: "No resources found.\n"
Jan 23 14:40:07.440: INFO: stdout: ""
Jan 23 14:40:07.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1552 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 23 14:40:07.510: INFO: stderr: ""
Jan 23 14:40:07.510: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:40:07.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1552" for this suite.
Jan 23 14:40:13.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:40:13.692: INFO: namespace kubectl-1552 deletion completed in 6.174049025s

• [SLOW TEST:17.484 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
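
The filtering flags exercised above are standard kubectl logs options and compose freely; against the same redis-master pod (kubeconfig and namespace flags omitted) they look like this:

    kubectl logs redis-master-dsctg redis-master --tail=1           # last line only
    kubectl logs redis-master-dsctg redis-master --limit-bytes=1    # first byte only
    kubectl logs redis-master-dsctg redis-master --tail=1 --timestamps
    kubectl logs redis-master-dsctg redis-master --since=1s         # empty when the pod has been quiet
    kubectl logs redis-master-dsctg redis-master --since=24h        # effectively everything here
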
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:40:13.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:40:24.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5740" for this suite.
Jan 23 14:40:46.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:40:47.016: INFO: namespace replication-controller-5740 deletion completed in 22.110006227s

• [SLOW TEST:33.323 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
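
Adoption works because the new controller's selector matches the orphan pod's labels and the pod has no existing controller owner reference, so the controller manager claims it instead of creating a replacement replica. A minimal reproduction sketch, names illustrative:

    kubectl run pod-adoption --restart=Never --generator=run-pod/v1 --labels=name=pod-adoption --image=docker.io/library/nginx:1.14-alpine
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-adoption
    spec:
      replicas: 1
      selector:
        name: pod-adoption
      template:
        metadata:
          labels:
            name: pod-adoption
        spec:
          containers:
          - name: pod-adoption
            image: docker.io/library/nginx:1.14-alpine
    EOF
    kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'
    # expected: ReplicationController
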
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:40:47.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 23 14:40:47.195: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-a,UID:1ed46f16-ac54-424d-8423-215f4c54891b,ResourceVersion:21571283,Generation:0,CreationTimestamp:2020-01-23 14:40:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 14:40:47.195: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-a,UID:1ed46f16-ac54-424d-8423-215f4c54891b,ResourceVersion:21571283,Generation:0,CreationTimestamp:2020-01-23 14:40:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 23 14:40:57.211: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-a,UID:1ed46f16-ac54-424d-8423-215f4c54891b,ResourceVersion:21571296,Generation:0,CreationTimestamp:2020-01-23 14:40:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 23 14:40:57.211: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-a,UID:1ed46f16-ac54-424d-8423-215f4c54891b,ResourceVersion:21571296,Generation:0,CreationTimestamp:2020-01-23 14:40:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 23 14:41:07.238: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-a,UID:1ed46f16-ac54-424d-8423-215f4c54891b,ResourceVersion:21571311,Generation:0,CreationTimestamp:2020-01-23 14:40:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 14:41:07.239: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-a,UID:1ed46f16-ac54-424d-8423-215f4c54891b,ResourceVersion:21571311,Generation:0,CreationTimestamp:2020-01-23 14:40:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 23 14:41:17.255: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-a,UID:1ed46f16-ac54-424d-8423-215f4c54891b,ResourceVersion:21571325,Generation:0,CreationTimestamp:2020-01-23 14:40:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 14:41:17.256: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-a,UID:1ed46f16-ac54-424d-8423-215f4c54891b,ResourceVersion:21571325,Generation:0,CreationTimestamp:2020-01-23 14:40:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 23 14:41:27.276: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-b,UID:d42b3c70-511e-4523-b67c-08526fa6f88c,ResourceVersion:21571339,Generation:0,CreationTimestamp:2020-01-23 14:41:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 14:41:27.277: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-b,UID:d42b3c70-511e-4523-b67c-08526fa6f88c,ResourceVersion:21571339,Generation:0,CreationTimestamp:2020-01-23 14:41:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 23 14:41:37.290: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-b,UID:d42b3c70-511e-4523-b67c-08526fa6f88c,ResourceVersion:21571354,Generation:0,CreationTimestamp:2020-01-23 14:41:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 14:41:37.290: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6300,SelfLink:/api/v1/namespaces/watch-6300/configmaps/e2e-watch-test-configmap-b,UID:d42b3c70-511e-4523-b67c-08526fa6f88c,ResourceVersion:21571354,Generation:0,CreationTimestamp:2020-01-23 14:41:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:41:47.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6300" for this suite.
Jan 23 14:41:53.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:41:53.473: INFO: namespace watch-6300 deletion completed in 6.173915706s

• [SLOW TEST:66.454 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
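
The same label-scoped watches can be driven from the CLI; a watcher only receives events for objects matching its selector. A sketch mirroring watcher A, with the selector and object name taken from the events above:

    kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch &
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: e2e-watch-test-configmap-a
      labels:
        watch-this-configmap: multiple-watchers-A
    EOF
    kubectl patch configmap e2e-watch-test-configmap-a --type=merge -p '{"data":{"mutation":"1"}}'
    kubectl delete configmap e2e-watch-test-configmap-a
    # the background watch prints a row for the add, the modification and the deletion
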
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:41:53.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-0a9fa890-6c5d-4393-95d5-9528ba4dad3d
STEP: Creating a pod to test consume secrets
Jan 23 14:41:53.587: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702" in namespace "projected-4685" to be "success or failure"
Jan 23 14:41:53.603: INFO: Pod "pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702": Phase="Pending", Reason="", readiness=false. Elapsed: 14.99403ms
Jan 23 14:41:55.616: INFO: Pod "pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028062884s
Jan 23 14:41:57.624: INFO: Pod "pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036674864s
Jan 23 14:41:59.639: INFO: Pod "pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051223797s
Jan 23 14:42:01.648: INFO: Pod "pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702": Phase="Running", Reason="", readiness=true. Elapsed: 8.060815039s
Jan 23 14:42:03.658: INFO: Pod "pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069893945s
STEP: Saw pod success
Jan 23 14:42:03.658: INFO: Pod "pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702" satisfied condition "success or failure"
Jan 23 14:42:03.661: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702 container secret-volume-test: 
STEP: delete the pod
Jan 23 14:42:03.863: INFO: Waiting for pod pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702 to disappear
Jan 23 14:42:03.885: INFO: Pod pod-projected-secrets-f8ef0ed0-04bb-41a5-8041-d499ea9bb702 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:42:03.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4685" for this suite.
Jan 23 14:42:09.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:42:10.017: INFO: namespace projected-4685 deletion completed in 6.116376192s

• [SLOW TEST:16.544 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
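
A projected volume can surface one secret at several mount points, or mix secrets, configMaps and downward API items in a single volume. A manifest sketch mounting the same secret twice, all names illustrative:

    kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo
    spec:
      containers:
      - name: secret-volume-test
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
        volumeMounts:
        - name: vol-1
          mountPath: /etc/projected-1
        - name: vol-2
          mountPath: /etc/projected-2
      restartPolicy: Never
      volumes:
      - name: vol-1
        projected:
          sources:
          - secret:
              name: projected-secret-demo
      - name: vol-2
        projected:
          sources:
          - secret:
              name: projected-secret-demo
    EOF
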
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:42:10.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 14:42:10.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 23 14:42:10.267: INFO: stderr: ""
Jan 23 14:42:10.267: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:42:10.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5135" for this suite.
Jan 23 14:42:16.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:42:16.544: INFO: namespace kubectl-5135 deletion completed in 6.268059176s

• [SLOW TEST:6.526 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
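
The assertion is simply that both the client and the server stanzas appear in the output; structured output makes the same check scriptable:

    kubectl version                                   # human-readable, both halves
    kubectl version -o json                           # machine-readable
    kubectl version -o json | grep -c gitVersion      # expect 2: clientVersion and serverVersion
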
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:42:16.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 23 14:42:16.690: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14f68f61-ef24-4f0c-ad3f-9144573c41fb" in namespace "downward-api-3767" to be "success or failure"
Jan 23 14:42:16.716: INFO: Pod "downwardapi-volume-14f68f61-ef24-4f0c-ad3f-9144573c41fb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.090164ms
Jan 23 14:42:18.725: INFO: Pod "downwardapi-volume-14f68f61-ef24-4f0c-ad3f-9144573c41fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035285794s
Jan 23 14:42:20.735: INFO: Pod "downwardapi-volume-14f68f61-ef24-4f0c-ad3f-9144573c41fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045275731s
Jan 23 14:42:22.744: INFO: Pod "downwardapi-volume-14f68f61-ef24-4f0c-ad3f-9144573c41fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054134589s
Jan 23 14:42:24.752: INFO: Pod "downwardapi-volume-14f68f61-ef24-4f0c-ad3f-9144573c41fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062089504s
STEP: Saw pod success
Jan 23 14:42:24.752: INFO: Pod "downwardapi-volume-14f68f61-ef24-4f0c-ad3f-9144573c41fb" satisfied condition "success or failure"
Jan 23 14:42:24.756: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-14f68f61-ef24-4f0c-ad3f-9144573c41fb container client-container: 
STEP: delete the pod
Jan 23 14:42:24.908: INFO: Waiting for pod downwardapi-volume-14f68f61-ef24-4f0c-ad3f-9144573c41fb to disappear
Jan 23 14:42:24.937: INFO: Pod downwardapi-volume-14f68f61-ef24-4f0c-ad3f-9144573c41fb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:42:24.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3767" for this suite.
Jan 23 14:42:31.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:42:31.230: INFO: namespace downward-api-3767 deletion completed in 6.202572996s

• [SLOW TEST:14.686 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
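
The memory limit reaches the container through a downwardAPI volume item with a resourceFieldRef; the mounted file holds the limit in bytes. A manifest sketch, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-demo
    spec:
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "cat /etc/podinfo/mem_limit"]
        resources:
          limits:
            memory: 64Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      restartPolicy: Never
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
    EOF
    # kubectl logs downward-demo prints 67108864 (64Mi in bytes)
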
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:42:31.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:42:39.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4499" for this suite.
Jan 23 14:43:31.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:43:31.626: INFO: namespace kubelet-test-4499 deletion completed in 52.237745026s

• [SLOW TEST:60.396 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
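
The test runs a one-shot busybox command and asserts its stdout lands in the container log; the same round trip from the CLI, names illustrative:

    kubectl run busybox-logs-demo --restart=Never --generator=run-pod/v1 --image=docker.io/library/busybox:1.29 -- /bin/sh -c 'echo Hello from the pod'
    kubectl logs busybox-logs-demo    # prints: Hello from the pod
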
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:43:31.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-d42e2952-cd7a-4b8e-86a9-b0c9fb31e4f2
STEP: Creating a pod to test consume configMaps
Jan 23 14:43:31.799: INFO: Waiting up to 5m0s for pod "pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58" in namespace "configmap-6097" to be "success or failure"
Jan 23 14:43:31.881: INFO: Pod "pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58": Phase="Pending", Reason="", readiness=false. Elapsed: 81.581625ms
Jan 23 14:43:33.888: INFO: Pod "pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089386847s
Jan 23 14:43:35.899: INFO: Pod "pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099627833s
Jan 23 14:43:37.913: INFO: Pod "pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113743543s
Jan 23 14:43:39.925: INFO: Pod "pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125619939s
Jan 23 14:43:41.933: INFO: Pod "pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134487782s
STEP: Saw pod success
Jan 23 14:43:41.934: INFO: Pod "pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58" satisfied condition "success or failure"
Jan 23 14:43:41.944: INFO: Trying to get logs from node iruya-node pod pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58 container configmap-volume-test: 
STEP: delete the pod
Jan 23 14:43:41.990: INFO: Waiting for pod pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58 to disappear
Jan 23 14:43:41.994: INFO: Pod pod-configmaps-310dc1a0-5a05-4194-91e6-ba27e5e0bc58 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:43:41.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6097" for this suite.
Jan 23 14:43:48.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:43:48.241: INFO: namespace configmap-6097 deletion completed in 6.240553193s

• [SLOW TEST:16.610 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
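
"Mappings" in the test name refers to the items list that remaps a configMap key to a custom path inside the volume, with the pod running under a non-root UID. A manifest sketch, names illustrative:

    kubectl create configmap configmap-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-demo-pod
    spec:
      securityContext:
        runAsUser: 1001                # non-root
      containers:
      - name: configmap-volume-test
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      restartPolicy: Never
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-demo
          items:
          - key: data-1
            path: path/to/data-1       # the "mapping": key data-1 surfaces at this path
    EOF
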
SSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:43:48.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:43:48.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4441" for this suite.
Jan 23 14:43:54.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:43:54.521: INFO: namespace services-4441 deletion completed in 6.148465484s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.280 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
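This spec creates no workload at all; its only assertion is that the apiserver's built-in "kubernetes" service in the default namespace exposes the secure port. A minimal sketch of that check, assuming an initialized *kubernetes.Clientset as in the sketch above:

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyMasterService mirrors the spec's single assertion: the built-in
// "kubernetes" service in "default" exposes port 443.
func verifyMasterService(ctx context.Context, clientset *kubernetes.Clientset) error {
	svc, err := clientset.CoreV1().Services("default").Get(ctx, "kubernetes", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, p := range svc.Spec.Ports {
		if p.Port == 443 {
			return nil
		}
	}
	return fmt.Errorf("service %q exposes no port 443", svc.Name)
}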
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:43:54.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:43:59.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-319" for this suite.
Jan 23 14:44:06.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:44:06.278: INFO: namespace watch-319 deletion completed in 6.241135639s

• [SLOW TEST:11.756 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
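The spec mutates configmaps from a background goroutine, opens several watches concurrently at different starting resource versions, and asserts that all of them deliver the overlapping events in the same order. A sketch of one such watcher, collecting the resource versions it observes (function and parameter names are illustrative; assumes an initialized clientset):

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// collectResourceVersions opens a watch on configmaps starting at fromRV and
// records the resource version of each event; the spec asserts these slices
// match across concurrently started watches.
func collectResourceVersions(ctx context.Context, clientset *kubernetes.Clientset,
	ns, fromRV string, n int) ([]string, error) {
	w, err := clientset.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		ResourceVersion: fromRV,
	})
	if err != nil {
		return nil, err
	}
	defer w.Stop()

	var got []string
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue
		}
		got = append(got, cm.ResourceVersion)
		if len(got) == n {
			break
		}
	}
	return got, nil
}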
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:44:06.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0123 14:44:22.929976       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 14:44:22.930: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:44:22.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8742" for this suite.
Jan 23 14:44:40.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:44:40.910: INFO: namespace gc-8742 deletion completed in 17.975042593s

• [SLOW TEST:34.631 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
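The interesting wiring in this spec is a pod carrying two OwnerReferences: when one owner is deleted with foreground propagation, the garbage collector must leave the pod alone because the other owner is still valid. A sketch of that wiring (illustrative names; assumes an initialized clientset):

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addSecondOwnerThenDelete makes pod a dependent of BOTH replication
// controllers, then foreground-deletes one of them; the GC must not remove
// the pod while rcToStay is a valid owner.
func addSecondOwnerThenDelete(ctx context.Context, clientset *kubernetes.Clientset,
	ns string, pod *corev1.Pod, rcToStay, rcToBeDeleted *corev1.ReplicationController) error {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       rcToStay.Name,
		UID:        rcToStay.UID,
	})
	if _, err := clientset.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Foreground deletion: the deleted owner waits for its dependents, but a
	// dependent with another live owner is left in place.
	fg := metav1.DeletePropagationForeground
	return clientset.CoreV1().ReplicationControllers(ns).Delete(ctx, rcToBeDeleted.Name,
		metav1.DeleteOptions{PropagationPolicy: &fg})
}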
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:44:40.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 23 14:44:51.543: INFO: Successfully updated pod "pod-update-f5ec5dcf-c6d0-4162-80c1-47cbd9eeced5"
STEP: verifying the updated pod is in kubernetes
Jan 23 14:44:51.558: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:44:51.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6815" for this suite.
Jan 23 14:45:13.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:45:13.755: INFO: namespace pods-6815 deletion completed in 22.188314515s

• [SLOW TEST:32.844 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
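"updating the pod" above is a read-modify-write against the live object; such an update can hit a resource-version conflict on a busy cluster, so a retry loop is the idiomatic shape. A sketch (assumes an initialized clientset; the label key and value are illustrative):

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePodLabel performs the read-modify-write the spec calls "updating the
// pod", retrying on resource-version conflicts.
func updatePodLabel(ctx context.Context, clientset *kubernetes.Clientset, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := clientset.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // labels are mutable; most of PodSpec is not
		_, err = clientset.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
		return err
	})
}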
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:45:13.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-a19acfbd-398a-4cf6-b8ec-f622f86b321a in namespace container-probe-8169
Jan 23 14:45:21.975: INFO: Started pod liveness-a19acfbd-398a-4cf6-b8ec-f622f86b321a in namespace container-probe-8169
STEP: checking the pod's current state and verifying that restartCount is present
Jan 23 14:45:21.979: INFO: Initial restart count of pod liveness-a19acfbd-398a-4cf6-b8ec-f622f86b321a is 0
Jan 23 14:45:38.073: INFO: Restart count of pod container-probe-8169/liveness-a19acfbd-398a-4cf6-b8ec-f622f86b321a is now 1 (16.093999549s elapsed)
Jan 23 14:45:56.173: INFO: Restart count of pod container-probe-8169/liveness-a19acfbd-398a-4cf6-b8ec-f622f86b321a is now 2 (34.193360439s elapsed)
Jan 23 14:46:16.437: INFO: Restart count of pod container-probe-8169/liveness-a19acfbd-398a-4cf6-b8ec-f622f86b321a is now 3 (54.457821581s elapsed)
Jan 23 14:46:36.613: INFO: Restart count of pod container-probe-8169/liveness-a19acfbd-398a-4cf6-b8ec-f622f86b321a is now 4 (1m14.633616178s elapsed)
Jan 23 14:47:35.013: INFO: Restart count of pod container-probe-8169/liveness-a19acfbd-398a-4cf6-b8ec-f622f86b321a is now 5 (2m13.033601831s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:47:35.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8169" for this suite.
Jan 23 14:47:41.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:47:41.230: INFO: namespace container-probe-8169 deletion completed in 6.162874116s

• [SLOW TEST:147.474 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
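The rising restart counts come from a liveness probe that keeps failing: each failure makes the kubelet restart the container and increment Status.ContainerStatuses[].RestartCount, which the spec requires to only ever grow. A sketch of a pod with an always-failing exec probe (illustrative, not the suite's actual probe; this uses the v1.15-era Handler field, renamed ProbeHandler in later APIs):

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingLivenessPod returns a pod whose liveness probe always fails, so the
// kubelet keeps restarting the container and the restart count rises
// monotonically, as in the log above.
func failingLivenessPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				LivenessProbe: &corev1.Probe{
					// /nonexistent never appears, so every probe fails.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/nonexistent"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
}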
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:47:41.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 23 14:47:41.298: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix273846598/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:47:41.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3809" for this suite.
Jan 23 14:47:47.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:47:47.606: INFO: namespace kubectl-3809 deletion completed in 6.18163285s

• [SLOW TEST:6.375 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
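With --unix-socket, kubectl proxy listens on a filesystem socket instead of TCP, and the spec then fetches /api/ through it. A sketch of the client side in Go using only the standard library (the socket path is the test's own; everything else is illustrative):

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

// getAPIOverUnixSocket fetches /api/ through a `kubectl proxy
// --unix-socket=<path>` listener, mirroring the spec's check.
func getAPIOverUnixSocket(path string) (string, error) {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host in the URL and dial the unix socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", path)
			},
		},
	}
	resp, err := client.Get("http://unix/api/")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status %s", resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}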
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:47:47.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-5242/configmap-test-73970947-151c-4dc1-8e41-fa3351ca4fc3
STEP: Creating a pod to test consume configMaps
Jan 23 14:47:47.764: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4" in namespace "configmap-5242" to be "success or failure"
Jan 23 14:47:47.769: INFO: Pod "pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.419659ms
Jan 23 14:47:49.804: INFO: Pod "pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040563491s
Jan 23 14:47:51.829: INFO: Pod "pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065004138s
Jan 23 14:47:53.851: INFO: Pod "pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087277313s
Jan 23 14:47:55.868: INFO: Pod "pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104061159s
Jan 23 14:47:57.878: INFO: Pod "pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114145612s
STEP: Saw pod success
Jan 23 14:47:57.878: INFO: Pod "pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4" satisfied condition "success or failure"
Jan 23 14:47:57.882: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4 container env-test: 
STEP: delete the pod
Jan 23 14:47:58.013: INFO: Waiting for pod pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4 to disappear
Jan 23 14:47:58.018: INFO: Pod pod-configmaps-ef515887-935f-40fe-a789-dbf534d925e4 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:47:58.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5242" for this suite.
Jan 23 14:48:04.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:48:04.193: INFO: namespace configmap-5242 deletion completed in 6.168367636s

• [SLOW TEST:16.587 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
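Here the same ConfigMap fixture is consumed through the environment rather than a volume: a container env var resolves a configMapKeyRef, and the test container simply echoes it. A sketch of the env wiring (names are illustrative):

import (
	corev1 "k8s.io/api/core/v1"
)

// envFromConfigMap wires a single ConfigMap key into the container
// environment, which is what the "consumable via environment variable"
// spec asserts on.
func envFromConfigMap(cmName, key string) []corev1.EnvVar {
	return []corev1.EnvVar{{
		Name: "CONFIG_DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				Key:                  key,
			},
		},
	}}
}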
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:48:04.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 23 14:48:04.339: INFO: Waiting up to 5m0s for pod "pod-65b82d99-ad1e-4f8d-98a2-87758e861e7b" in namespace "emptydir-1564" to be "success or failure"
Jan 23 14:48:04.343: INFO: Pod "pod-65b82d99-ad1e-4f8d-98a2-87758e861e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.884444ms
Jan 23 14:48:06.355: INFO: Pod "pod-65b82d99-ad1e-4f8d-98a2-87758e861e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015612858s
Jan 23 14:48:08.364: INFO: Pod "pod-65b82d99-ad1e-4f8d-98a2-87758e861e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024554579s
Jan 23 14:48:10.370: INFO: Pod "pod-65b82d99-ad1e-4f8d-98a2-87758e861e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031326047s
Jan 23 14:48:12.378: INFO: Pod "pod-65b82d99-ad1e-4f8d-98a2-87758e861e7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039011395s
STEP: Saw pod success
Jan 23 14:48:12.378: INFO: Pod "pod-65b82d99-ad1e-4f8d-98a2-87758e861e7b" satisfied condition "success or failure"
Jan 23 14:48:12.383: INFO: Trying to get logs from node iruya-node pod pod-65b82d99-ad1e-4f8d-98a2-87758e861e7b container test-container: 
STEP: delete the pod
Jan 23 14:48:12.571: INFO: Waiting for pod pod-65b82d99-ad1e-4f8d-98a2-87758e861e7b to disappear
Jan 23 14:48:12.582: INFO: Pod pod-65b82d99-ad1e-4f8d-98a2-87758e861e7b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:48:12.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1564" for this suite.
Jan 23 14:48:18.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:48:18.819: INFO: namespace emptydir-1564 deletion completed in 6.230460603s

• [SLOW TEST:14.626 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
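"on tmpfs" means the emptyDir declares Medium: Memory, so the kubelet backs it with a RAM filesystem rather than node disk; the test container then stats the mount point and the suite asserts on the mode bits. The volume declaration, sketched (names illustrative):

import (
	corev1 "k8s.io/api/core/v1"
)

// memoryBackedEmptyDir declares the tmpfs-backed emptyDir the spec mounts;
// the test container then inspects the mount's mode, as the log shows.
func memoryBackedEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory, // tmpfs rather than node disk
			},
		},
	}
}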
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:48:18.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 23 14:48:38.995: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9016 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:48:38.995: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:48:39.060642       8 log.go:172] (0xc002393600) (0xc002d86c80) Create stream
I0123 14:48:39.060708       8 log.go:172] (0xc002393600) (0xc002d86c80) Stream added, broadcasting: 1
I0123 14:48:39.069466       8 log.go:172] (0xc002393600) Reply frame received for 1
I0123 14:48:39.069516       8 log.go:172] (0xc002393600) (0xc0019f8dc0) Create stream
I0123 14:48:39.069528       8 log.go:172] (0xc002393600) (0xc0019f8dc0) Stream added, broadcasting: 3
I0123 14:48:39.071685       8 log.go:172] (0xc002393600) Reply frame received for 3
I0123 14:48:39.071799       8 log.go:172] (0xc002393600) (0xc002d86d20) Create stream
I0123 14:48:39.071889       8 log.go:172] (0xc002393600) (0xc002d86d20) Stream added, broadcasting: 5
I0123 14:48:39.076632       8 log.go:172] (0xc002393600) Reply frame received for 5
I0123 14:48:39.201452       8 log.go:172] (0xc002393600) Data frame received for 3
I0123 14:48:39.201531       8 log.go:172] (0xc0019f8dc0) (3) Data frame handling
I0123 14:48:39.201560       8 log.go:172] (0xc0019f8dc0) (3) Data frame sent
I0123 14:48:39.345175       8 log.go:172] (0xc002393600) Data frame received for 1
I0123 14:48:39.345229       8 log.go:172] (0xc002d86c80) (1) Data frame handling
I0123 14:48:39.345292       8 log.go:172] (0xc002d86c80) (1) Data frame sent
I0123 14:48:39.346217       8 log.go:172] (0xc002393600) (0xc002d86c80) Stream removed, broadcasting: 1
I0123 14:48:39.351624       8 log.go:172] (0xc002393600) (0xc0019f8dc0) Stream removed, broadcasting: 3
I0123 14:48:39.351717       8 log.go:172] (0xc002393600) (0xc002d86d20) Stream removed, broadcasting: 5
I0123 14:48:39.351762       8 log.go:172] (0xc002393600) (0xc002d86c80) Stream removed, broadcasting: 1
I0123 14:48:39.351775       8 log.go:172] (0xc002393600) (0xc0019f8dc0) Stream removed, broadcasting: 3
I0123 14:48:39.351788       8 log.go:172] (0xc002393600) (0xc002d86d20) Stream removed, broadcasting: 5
Jan 23 14:48:39.351: INFO: Exec stderr: ""
Jan 23 14:48:39.351: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9016 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:48:39.351: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:48:39.352346       8 log.go:172] (0xc002393600) Go away received
I0123 14:48:39.429700       8 log.go:172] (0xc002bf1080) (0xc0019f92c0) Create stream
I0123 14:48:39.429858       8 log.go:172] (0xc002bf1080) (0xc0019f92c0) Stream added, broadcasting: 1
I0123 14:48:39.438878       8 log.go:172] (0xc002bf1080) Reply frame received for 1
I0123 14:48:39.439054       8 log.go:172] (0xc002bf1080) (0xc0015354a0) Create stream
I0123 14:48:39.439072       8 log.go:172] (0xc002bf1080) (0xc0015354a0) Stream added, broadcasting: 3
I0123 14:48:39.441143       8 log.go:172] (0xc002bf1080) Reply frame received for 3
I0123 14:48:39.441166       8 log.go:172] (0xc002bf1080) (0xc0019f9360) Create stream
I0123 14:48:39.441173       8 log.go:172] (0xc002bf1080) (0xc0019f9360) Stream added, broadcasting: 5
I0123 14:48:39.442356       8 log.go:172] (0xc002bf1080) Reply frame received for 5
I0123 14:48:39.556577       8 log.go:172] (0xc002bf1080) Data frame received for 3
I0123 14:48:39.556713       8 log.go:172] (0xc0015354a0) (3) Data frame handling
I0123 14:48:39.556746       8 log.go:172] (0xc0015354a0) (3) Data frame sent
I0123 14:48:39.682305       8 log.go:172] (0xc002bf1080) (0xc0015354a0) Stream removed, broadcasting: 3
I0123 14:48:39.682432       8 log.go:172] (0xc002bf1080) Data frame received for 1
I0123 14:48:39.682456       8 log.go:172] (0xc002bf1080) (0xc0019f9360) Stream removed, broadcasting: 5
I0123 14:48:39.682501       8 log.go:172] (0xc0019f92c0) (1) Data frame handling
I0123 14:48:39.682608       8 log.go:172] (0xc0019f92c0) (1) Data frame sent
I0123 14:48:39.682638       8 log.go:172] (0xc002bf1080) (0xc0019f92c0) Stream removed, broadcasting: 1
I0123 14:48:39.682665       8 log.go:172] (0xc002bf1080) Go away received
I0123 14:48:39.683201       8 log.go:172] (0xc002bf1080) (0xc0019f92c0) Stream removed, broadcasting: 1
I0123 14:48:39.683234       8 log.go:172] (0xc002bf1080) (0xc0015354a0) Stream removed, broadcasting: 3
I0123 14:48:39.683252       8 log.go:172] (0xc002bf1080) (0xc0019f9360) Stream removed, broadcasting: 5
Jan 23 14:48:39.683: INFO: Exec stderr: ""
Jan 23 14:48:39.683: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9016 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:48:39.683: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:48:39.757163       8 log.go:172] (0xc002c968f0) (0xc002d87040) Create stream
I0123 14:48:39.757207       8 log.go:172] (0xc002c968f0) (0xc002d87040) Stream added, broadcasting: 1
I0123 14:48:39.762451       8 log.go:172] (0xc002c968f0) Reply frame received for 1
I0123 14:48:39.762486       8 log.go:172] (0xc002c968f0) (0xc0019f9400) Create stream
I0123 14:48:39.762498       8 log.go:172] (0xc002c968f0) (0xc0019f9400) Stream added, broadcasting: 3
I0123 14:48:39.764741       8 log.go:172] (0xc002c968f0) Reply frame received for 3
I0123 14:48:39.764777       8 log.go:172] (0xc002c968f0) (0xc001535540) Create stream
I0123 14:48:39.764788       8 log.go:172] (0xc002c968f0) (0xc001535540) Stream added, broadcasting: 5
I0123 14:48:39.766572       8 log.go:172] (0xc002c968f0) Reply frame received for 5
I0123 14:48:39.909590       8 log.go:172] (0xc002c968f0) Data frame received for 3
I0123 14:48:39.910083       8 log.go:172] (0xc0019f9400) (3) Data frame handling
I0123 14:48:39.910128       8 log.go:172] (0xc0019f9400) (3) Data frame sent
I0123 14:48:40.066338       8 log.go:172] (0xc002c968f0) Data frame received for 1
I0123 14:48:40.066409       8 log.go:172] (0xc002c968f0) (0xc0019f9400) Stream removed, broadcasting: 3
I0123 14:48:40.066456       8 log.go:172] (0xc002d87040) (1) Data frame handling
I0123 14:48:40.066476       8 log.go:172] (0xc002d87040) (1) Data frame sent
I0123 14:48:40.066492       8 log.go:172] (0xc002c968f0) (0xc001535540) Stream removed, broadcasting: 5
I0123 14:48:40.066512       8 log.go:172] (0xc002c968f0) (0xc002d87040) Stream removed, broadcasting: 1
I0123 14:48:40.066527       8 log.go:172] (0xc002c968f0) Go away received
I0123 14:48:40.066708       8 log.go:172] (0xc002c968f0) (0xc002d87040) Stream removed, broadcasting: 1
I0123 14:48:40.066727       8 log.go:172] (0xc002c968f0) (0xc0019f9400) Stream removed, broadcasting: 3
I0123 14:48:40.066741       8 log.go:172] (0xc002c968f0) (0xc001535540) Stream removed, broadcasting: 5
Jan 23 14:48:40.066: INFO: Exec stderr: ""
Jan 23 14:48:40.066: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9016 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:48:40.067: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:48:40.124211       8 log.go:172] (0xc001c2b970) (0xc0015359a0) Create stream
I0123 14:48:40.124365       8 log.go:172] (0xc001c2b970) (0xc0015359a0) Stream added, broadcasting: 1
I0123 14:48:40.136660       8 log.go:172] (0xc001c2b970) Reply frame received for 1
I0123 14:48:40.136709       8 log.go:172] (0xc001c2b970) (0xc00096dae0) Create stream
I0123 14:48:40.136715       8 log.go:172] (0xc001c2b970) (0xc00096dae0) Stream added, broadcasting: 3
I0123 14:48:40.138261       8 log.go:172] (0xc001c2b970) Reply frame received for 3
I0123 14:48:40.138289       8 log.go:172] (0xc001c2b970) (0xc002132b40) Create stream
I0123 14:48:40.138297       8 log.go:172] (0xc001c2b970) (0xc002132b40) Stream added, broadcasting: 5
I0123 14:48:40.140371       8 log.go:172] (0xc001c2b970) Reply frame received for 5
I0123 14:48:40.258910       8 log.go:172] (0xc001c2b970) Data frame received for 3
I0123 14:48:40.259126       8 log.go:172] (0xc00096dae0) (3) Data frame handling
I0123 14:48:40.259166       8 log.go:172] (0xc00096dae0) (3) Data frame sent
I0123 14:48:40.444137       8 log.go:172] (0xc001c2b970) Data frame received for 1
I0123 14:48:40.444282       8 log.go:172] (0xc0015359a0) (1) Data frame handling
I0123 14:48:40.444331       8 log.go:172] (0xc0015359a0) (1) Data frame sent
I0123 14:48:40.444355       8 log.go:172] (0xc001c2b970) (0xc0015359a0) Stream removed, broadcasting: 1
I0123 14:48:40.444533       8 log.go:172] (0xc001c2b970) (0xc00096dae0) Stream removed, broadcasting: 3
I0123 14:48:40.444568       8 log.go:172] (0xc001c2b970) (0xc002132b40) Stream removed, broadcasting: 5
I0123 14:48:40.444581       8 log.go:172] (0xc001c2b970) Go away received
I0123 14:48:40.444668       8 log.go:172] (0xc001c2b970) (0xc0015359a0) Stream removed, broadcasting: 1
I0123 14:48:40.444698       8 log.go:172] (0xc001c2b970) (0xc00096dae0) Stream removed, broadcasting: 3
I0123 14:48:40.444704       8 log.go:172] (0xc001c2b970) (0xc002132b40) Stream removed, broadcasting: 5
Jan 23 14:48:40.444: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 23 14:48:40.444: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9016 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:48:40.445: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:48:40.527762       8 log.go:172] (0xc00223f080) (0xc002132d20) Create stream
I0123 14:48:40.527981       8 log.go:172] (0xc00223f080) (0xc002132d20) Stream added, broadcasting: 1
I0123 14:48:40.585483       8 log.go:172] (0xc00223f080) Reply frame received for 1
I0123 14:48:40.585746       8 log.go:172] (0xc00223f080) (0xc002d870e0) Create stream
I0123 14:48:40.585769       8 log.go:172] (0xc00223f080) (0xc002d870e0) Stream added, broadcasting: 3
I0123 14:48:40.593107       8 log.go:172] (0xc00223f080) Reply frame received for 3
I0123 14:48:40.593221       8 log.go:172] (0xc00223f080) (0xc002132dc0) Create stream
I0123 14:48:40.593241       8 log.go:172] (0xc00223f080) (0xc002132dc0) Stream added, broadcasting: 5
I0123 14:48:40.598309       8 log.go:172] (0xc00223f080) Reply frame received for 5
I0123 14:48:40.779368       8 log.go:172] (0xc00223f080) Data frame received for 3
I0123 14:48:40.779477       8 log.go:172] (0xc002d870e0) (3) Data frame handling
I0123 14:48:40.779501       8 log.go:172] (0xc002d870e0) (3) Data frame sent
I0123 14:48:40.909641       8 log.go:172] (0xc00223f080) (0xc002d870e0) Stream removed, broadcasting: 3
I0123 14:48:40.909905       8 log.go:172] (0xc00223f080) Data frame received for 1
I0123 14:48:40.909940       8 log.go:172] (0xc00223f080) (0xc002132dc0) Stream removed, broadcasting: 5
I0123 14:48:40.909974       8 log.go:172] (0xc002132d20) (1) Data frame handling
I0123 14:48:40.910020       8 log.go:172] (0xc002132d20) (1) Data frame sent
I0123 14:48:40.910027       8 log.go:172] (0xc00223f080) (0xc002132d20) Stream removed, broadcasting: 1
I0123 14:48:40.910043       8 log.go:172] (0xc00223f080) Go away received
I0123 14:48:40.910441       8 log.go:172] (0xc00223f080) (0xc002132d20) Stream removed, broadcasting: 1
I0123 14:48:40.910455       8 log.go:172] (0xc00223f080) (0xc002d870e0) Stream removed, broadcasting: 3
I0123 14:48:40.910461       8 log.go:172] (0xc00223f080) (0xc002132dc0) Stream removed, broadcasting: 5
Jan 23 14:48:40.910: INFO: Exec stderr: ""
Jan 23 14:48:40.910: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9016 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:48:40.910: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:48:41.053907       8 log.go:172] (0xc001b8d550) (0xc001d2c8c0) Create stream
I0123 14:48:41.053979       8 log.go:172] (0xc001b8d550) (0xc001d2c8c0) Stream added, broadcasting: 1
I0123 14:48:41.059495       8 log.go:172] (0xc001b8d550) Reply frame received for 1
I0123 14:48:41.059519       8 log.go:172] (0xc001b8d550) (0xc002132e60) Create stream
I0123 14:48:41.059527       8 log.go:172] (0xc001b8d550) (0xc002132e60) Stream added, broadcasting: 3
I0123 14:48:41.060795       8 log.go:172] (0xc001b8d550) Reply frame received for 3
I0123 14:48:41.060816       8 log.go:172] (0xc001b8d550) (0xc002d87180) Create stream
I0123 14:48:41.060823       8 log.go:172] (0xc001b8d550) (0xc002d87180) Stream added, broadcasting: 5
I0123 14:48:41.062010       8 log.go:172] (0xc001b8d550) Reply frame received for 5
I0123 14:48:41.178115       8 log.go:172] (0xc001b8d550) Data frame received for 3
I0123 14:48:41.178174       8 log.go:172] (0xc002132e60) (3) Data frame handling
I0123 14:48:41.178192       8 log.go:172] (0xc002132e60) (3) Data frame sent
I0123 14:48:41.326175       8 log.go:172] (0xc001b8d550) Data frame received for 1
I0123 14:48:41.326322       8 log.go:172] (0xc001b8d550) (0xc002d87180) Stream removed, broadcasting: 5
I0123 14:48:41.326364       8 log.go:172] (0xc001d2c8c0) (1) Data frame handling
I0123 14:48:41.326392       8 log.go:172] (0xc001d2c8c0) (1) Data frame sent
I0123 14:48:41.326406       8 log.go:172] (0xc001b8d550) (0xc002132e60) Stream removed, broadcasting: 3
I0123 14:48:41.326443       8 log.go:172] (0xc001b8d550) (0xc001d2c8c0) Stream removed, broadcasting: 1
I0123 14:48:41.326452       8 log.go:172] (0xc001b8d550) Go away received
I0123 14:48:41.326641       8 log.go:172] (0xc001b8d550) (0xc001d2c8c0) Stream removed, broadcasting: 1
I0123 14:48:41.326656       8 log.go:172] (0xc001b8d550) (0xc002132e60) Stream removed, broadcasting: 3
I0123 14:48:41.326664       8 log.go:172] (0xc001b8d550) (0xc002d87180) Stream removed, broadcasting: 5
Jan 23 14:48:41.326: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 23 14:48:41.326: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9016 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:48:41.327: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:48:41.409683       8 log.go:172] (0xc00297c210) (0xc0021332c0) Create stream
I0123 14:48:41.409827       8 log.go:172] (0xc00297c210) (0xc0021332c0) Stream added, broadcasting: 1
I0123 14:48:41.414624       8 log.go:172] (0xc00297c210) Reply frame received for 1
I0123 14:48:41.414657       8 log.go:172] (0xc00297c210) (0xc0019f94a0) Create stream
I0123 14:48:41.414666       8 log.go:172] (0xc00297c210) (0xc0019f94a0) Stream added, broadcasting: 3
I0123 14:48:41.416534       8 log.go:172] (0xc00297c210) Reply frame received for 3
I0123 14:48:41.416551       8 log.go:172] (0xc00297c210) (0xc0019f9540) Create stream
I0123 14:48:41.416559       8 log.go:172] (0xc00297c210) (0xc0019f9540) Stream added, broadcasting: 5
I0123 14:48:41.418266       8 log.go:172] (0xc00297c210) Reply frame received for 5
I0123 14:48:41.514532       8 log.go:172] (0xc00297c210) Data frame received for 3
I0123 14:48:41.514703       8 log.go:172] (0xc0019f94a0) (3) Data frame handling
I0123 14:48:41.514746       8 log.go:172] (0xc0019f94a0) (3) Data frame sent
I0123 14:48:41.662953       8 log.go:172] (0xc00297c210) Data frame received for 1
I0123 14:48:41.663131       8 log.go:172] (0xc00297c210) (0xc0019f9540) Stream removed, broadcasting: 5
I0123 14:48:41.663392       8 log.go:172] (0xc0021332c0) (1) Data frame handling
I0123 14:48:41.663529       8 log.go:172] (0xc0021332c0) (1) Data frame sent
I0123 14:48:41.663592       8 log.go:172] (0xc00297c210) (0xc0019f94a0) Stream removed, broadcasting: 3
I0123 14:48:41.663650       8 log.go:172] (0xc00297c210) (0xc0021332c0) Stream removed, broadcasting: 1
I0123 14:48:41.663675       8 log.go:172] (0xc00297c210) Go away received
I0123 14:48:41.663768       8 log.go:172] (0xc00297c210) (0xc0021332c0) Stream removed, broadcasting: 1
I0123 14:48:41.663779       8 log.go:172] (0xc00297c210) (0xc0019f94a0) Stream removed, broadcasting: 3
I0123 14:48:41.663786       8 log.go:172] (0xc00297c210) (0xc0019f9540) Stream removed, broadcasting: 5
Jan 23 14:48:41.663: INFO: Exec stderr: ""
Jan 23 14:48:41.663: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9016 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:48:41.664: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:48:41.717045       8 log.go:172] (0xc001b8dd90) (0xc001d2ca00) Create stream
I0123 14:48:41.717076       8 log.go:172] (0xc001b8dd90) (0xc001d2ca00) Stream added, broadcasting: 1
I0123 14:48:41.721811       8 log.go:172] (0xc001b8dd90) Reply frame received for 1
I0123 14:48:41.721830       8 log.go:172] (0xc001b8dd90) (0xc002133360) Create stream
I0123 14:48:41.721837       8 log.go:172] (0xc001b8dd90) (0xc002133360) Stream added, broadcasting: 3
I0123 14:48:41.723069       8 log.go:172] (0xc001b8dd90) Reply frame received for 3
I0123 14:48:41.723101       8 log.go:172] (0xc001b8dd90) (0xc001535a40) Create stream
I0123 14:48:41.723112       8 log.go:172] (0xc001b8dd90) (0xc001535a40) Stream added, broadcasting: 5
I0123 14:48:41.724414       8 log.go:172] (0xc001b8dd90) Reply frame received for 5
I0123 14:48:41.818400       8 log.go:172] (0xc001b8dd90) Data frame received for 3
I0123 14:48:41.818578       8 log.go:172] (0xc002133360) (3) Data frame handling
I0123 14:48:41.818605       8 log.go:172] (0xc002133360) (3) Data frame sent
I0123 14:48:41.960561       8 log.go:172] (0xc001b8dd90) (0xc002133360) Stream removed, broadcasting: 3
I0123 14:48:41.960873       8 log.go:172] (0xc001b8dd90) Data frame received for 1
I0123 14:48:41.960905       8 log.go:172] (0xc001d2ca00) (1) Data frame handling
I0123 14:48:41.960951       8 log.go:172] (0xc001b8dd90) (0xc001535a40) Stream removed, broadcasting: 5
I0123 14:48:41.961024       8 log.go:172] (0xc001d2ca00) (1) Data frame sent
I0123 14:48:41.961040       8 log.go:172] (0xc001b8dd90) (0xc001d2ca00) Stream removed, broadcasting: 1
I0123 14:48:41.961067       8 log.go:172] (0xc001b8dd90) Go away received
I0123 14:48:41.961372       8 log.go:172] (0xc001b8dd90) (0xc001d2ca00) Stream removed, broadcasting: 1
I0123 14:48:41.961550       8 log.go:172] (0xc001b8dd90) (0xc002133360) Stream removed, broadcasting: 3
I0123 14:48:41.961596       8 log.go:172] (0xc001b8dd90) (0xc001535a40) Stream removed, broadcasting: 5
Jan 23 14:48:41.961: INFO: Exec stderr: ""
Jan 23 14:48:41.961: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9016 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:48:41.962: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:48:42.026095       8 log.go:172] (0xc002910370) (0xc002d87540) Create stream
I0123 14:48:42.026312       8 log.go:172] (0xc002910370) (0xc002d87540) Stream added, broadcasting: 1
I0123 14:48:42.030331       8 log.go:172] (0xc002910370) Reply frame received for 1
I0123 14:48:42.030398       8 log.go:172] (0xc002910370) (0xc001535ae0) Create stream
I0123 14:48:42.030411       8 log.go:172] (0xc002910370) (0xc001535ae0) Stream added, broadcasting: 3
I0123 14:48:42.032448       8 log.go:172] (0xc002910370) Reply frame received for 3
I0123 14:48:42.032482       8 log.go:172] (0xc002910370) (0xc001d2cd20) Create stream
I0123 14:48:42.032497       8 log.go:172] (0xc002910370) (0xc001d2cd20) Stream added, broadcasting: 5
I0123 14:48:42.037350       8 log.go:172] (0xc002910370) Reply frame received for 5
I0123 14:48:42.159407       8 log.go:172] (0xc002910370) Data frame received for 3
I0123 14:48:42.159779       8 log.go:172] (0xc001535ae0) (3) Data frame handling
I0123 14:48:42.159823       8 log.go:172] (0xc001535ae0) (3) Data frame sent
I0123 14:48:42.309333       8 log.go:172] (0xc002910370) Data frame received for 1
I0123 14:48:42.309515       8 log.go:172] (0xc002910370) (0xc001535ae0) Stream removed, broadcasting: 3
I0123 14:48:42.309586       8 log.go:172] (0xc002d87540) (1) Data frame handling
I0123 14:48:42.309702       8 log.go:172] (0xc002d87540) (1) Data frame sent
I0123 14:48:42.309741       8 log.go:172] (0xc002910370) (0xc001d2cd20) Stream removed, broadcasting: 5
I0123 14:48:42.309810       8 log.go:172] (0xc002910370) (0xc002d87540) Stream removed, broadcasting: 1
I0123 14:48:42.309851       8 log.go:172] (0xc002910370) Go away received
I0123 14:48:42.310140       8 log.go:172] (0xc002910370) (0xc002d87540) Stream removed, broadcasting: 1
I0123 14:48:42.310187       8 log.go:172] (0xc002910370) (0xc001535ae0) Stream removed, broadcasting: 3
I0123 14:48:42.310197       8 log.go:172] (0xc002910370) (0xc001d2cd20) Stream removed, broadcasting: 5
Jan 23 14:48:42.310: INFO: Exec stderr: ""
Jan 23 14:48:42.310: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9016 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 14:48:42.310: INFO: >>> kubeConfig: /root/.kube/config
I0123 14:48:42.363282       8 log.go:172] (0xc00297cf20) (0xc002133720) Create stream
I0123 14:48:42.363417       8 log.go:172] (0xc00297cf20) (0xc002133720) Stream added, broadcasting: 1
I0123 14:48:42.373319       8 log.go:172] (0xc00297cf20) Reply frame received for 1
I0123 14:48:42.373384       8 log.go:172] (0xc00297cf20) (0xc001535cc0) Create stream
I0123 14:48:42.373399       8 log.go:172] (0xc00297cf20) (0xc001535cc0) Stream added, broadcasting: 3
I0123 14:48:42.377763       8 log.go:172] (0xc00297cf20) Reply frame received for 3
I0123 14:48:42.377784       8 log.go:172] (0xc00297cf20) (0xc0021337c0) Create stream
I0123 14:48:42.377791       8 log.go:172] (0xc00297cf20) (0xc0021337c0) Stream added, broadcasting: 5
I0123 14:48:42.379921       8 log.go:172] (0xc00297cf20) Reply frame received for 5
I0123 14:48:42.533250       8 log.go:172] (0xc00297cf20) Data frame received for 3
I0123 14:48:42.533456       8 log.go:172] (0xc001535cc0) (3) Data frame handling
I0123 14:48:42.533501       8 log.go:172] (0xc001535cc0) (3) Data frame sent
I0123 14:48:42.690179       8 log.go:172] (0xc00297cf20) (0xc001535cc0) Stream removed, broadcasting: 3
I0123 14:48:42.690631       8 log.go:172] (0xc00297cf20) (0xc0021337c0) Stream removed, broadcasting: 5
I0123 14:48:42.690718       8 log.go:172] (0xc00297cf20) Data frame received for 1
I0123 14:48:42.690786       8 log.go:172] (0xc002133720) (1) Data frame handling
I0123 14:48:42.690877       8 log.go:172] (0xc002133720) (1) Data frame sent
I0123 14:48:42.690924       8 log.go:172] (0xc00297cf20) (0xc002133720) Stream removed, broadcasting: 1
I0123 14:48:42.690960       8 log.go:172] (0xc00297cf20) Go away received
I0123 14:48:42.691441       8 log.go:172] (0xc00297cf20) (0xc002133720) Stream removed, broadcasting: 1
I0123 14:48:42.691607       8 log.go:172] (0xc00297cf20) (0xc001535cc0) Stream removed, broadcasting: 3
I0123 14:48:42.691614       8 log.go:172] (0xc00297cf20) (0xc0021337c0) Stream removed, broadcasting: 5
Jan 23 14:48:42.691: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:48:42.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-9016" for this suite.
Jan 23 14:49:28.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:49:28.943: INFO: namespace e2e-kubelet-etc-hosts-9016 deletion completed in 46.24230485s

• [SLOW TEST:70.123 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
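All of the Create stream / Data frame chatter above is client-go's SPDY transport multiplexing one exec call per `cat`; in this run, stream 3 carries stdout, stream 5 stderr, and stream 1 the final status. The framework's ExecWithOptions boils down to the remotecommand API, sketched here (assumes client-go v0.18-era signatures; Stream was later superseded by StreamWithContext):

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a command in a container and captures stdout/stderr; the
// numbered streams in the log are the SPDY channels this call multiplexes.
func execInPod(config *restclient.Config, clientset *kubernetes.Clientset,
	ns, pod, container string, command []string) (string, string, error) {
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Name(pod).Namespace(ns).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   command,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}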
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:49:28.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 23 14:49:29.079: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1435,SelfLink:/api/v1/namespaces/watch-1435/configmaps/e2e-watch-test-resource-version,UID:86127901-6b87-4d29-acd0-fda9835a5507,ResourceVersion:21572541,Generation:0,CreationTimestamp:2020-01-23 14:49:29 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 14:49:29.080: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1435,SelfLink:/api/v1/namespaces/watch-1435/configmaps/e2e-watch-test-resource-version,UID:86127901-6b87-4d29-acd0-fda9835a5507,ResourceVersion:21572542,Generation:0,CreationTimestamp:2020-01-23 14:49:29 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:49:29.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1435" for this suite.
Jan 23 14:49:35.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:49:35.211: INFO: namespace watch-1435 deletion completed in 6.125093911s

• [SLOW TEST:6.268 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
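Unlike the earlier Watchers spec, this one opens a single watch at the ResourceVersion returned by the first update, so only the later events are replayed: exactly the MODIFIED (mutation: 2) and DELETED pair logged above. A sketch of that sequence (illustrative names; the suite additionally filters the watch down to the one configmap, which a namespace-wide watch in a shared namespace would need too):

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchFromFirstUpdate mutates a freshly created configmap twice, deletes it,
// then replays history from the first update's ResourceVersion: only the
// second MODIFIED and the final DELETED event should arrive.
func watchFromFirstUpdate(ctx context.Context, clientset *kubernetes.Clientset,
	ns string, cm *corev1.ConfigMap) error {
	cm.Data = map[string]string{"mutation": "1"}
	first, err := clientset.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	if err != nil {
		return err
	}
	first.Data["mutation"] = "2"
	if _, err := clientset.CoreV1().ConfigMaps(ns).Update(ctx, first, metav1.UpdateOptions{}); err != nil {
		return err
	}
	if err := clientset.CoreV1().ConfigMaps(ns).Delete(ctx, cm.Name, metav1.DeleteOptions{}); err != nil {
		return err
	}

	w, err := clientset.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		ResourceVersion: first.ResourceVersion, // start strictly after the first update
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for _, want := range []watch.EventType{watch.Modified, watch.Deleted} {
		ev := <-w.ResultChan()
		if ev.Type != want {
			return fmt.Errorf("got %v, want %v", ev.Type, want)
		}
	}
	return nil
}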
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:49:35.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f13fb11a-e37c-412c-9653-9d7972de52c9
STEP: Creating a pod to test consume configMaps
Jan 23 14:49:35.365: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a674ddb-058e-443d-b8d0-6d8931bb5082" in namespace "configmap-9997" to be "success or failure"
Jan 23 14:49:35.371: INFO: Pod "pod-configmaps-5a674ddb-058e-443d-b8d0-6d8931bb5082": Phase="Pending", Reason="", readiness=false. Elapsed: 6.260361ms
Jan 23 14:49:37.387: INFO: Pod "pod-configmaps-5a674ddb-058e-443d-b8d0-6d8931bb5082": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022028243s
Jan 23 14:49:39.396: INFO: Pod "pod-configmaps-5a674ddb-058e-443d-b8d0-6d8931bb5082": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030787055s
Jan 23 14:49:41.413: INFO: Pod "pod-configmaps-5a674ddb-058e-443d-b8d0-6d8931bb5082": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0481582s
Jan 23 14:49:43.423: INFO: Pod "pod-configmaps-5a674ddb-058e-443d-b8d0-6d8931bb5082": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058435456s
STEP: Saw pod success
Jan 23 14:49:43.423: INFO: Pod "pod-configmaps-5a674ddb-058e-443d-b8d0-6d8931bb5082" satisfied condition "success or failure"
Jan 23 14:49:43.428: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5a674ddb-058e-443d-b8d0-6d8931bb5082 container configmap-volume-test: 
STEP: delete the pod
Jan 23 14:49:43.710: INFO: Waiting for pod pod-configmaps-5a674ddb-058e-443d-b8d0-6d8931bb5082 to disappear
Jan 23 14:49:43.731: INFO: Pod pod-configmaps-5a674ddb-058e-443d-b8d0-6d8931bb5082 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:49:43.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9997" for this suite.
Jan 23 14:49:49.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:49:49.971: INFO: namespace configmap-9997 deletion completed in 6.230284984s

• [SLOW TEST:14.759 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:49:49.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-bd0e7390-78b8-4025-afd0-8381d5eab9a9
Jan 23 14:49:50.103: INFO: Pod name my-hostname-basic-bd0e7390-78b8-4025-afd0-8381d5eab9a9: Found 0 pods out of 1
Jan 23 14:49:55.114: INFO: Pod name my-hostname-basic-bd0e7390-78b8-4025-afd0-8381d5eab9a9: Found 1 pods out of 1
Jan 23 14:49:55.114: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bd0e7390-78b8-4025-afd0-8381d5eab9a9" are running
Jan 23 14:49:59.128: INFO: Pod "my-hostname-basic-bd0e7390-78b8-4025-afd0-8381d5eab9a9-cm876" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 14:49:50 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 14:49:50 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bd0e7390-78b8-4025-afd0-8381d5eab9a9]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 14:49:50 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bd0e7390-78b8-4025-afd0-8381d5eab9a9]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-23 14:49:50 +0000 UTC Reason: Message:}])
Jan 23 14:49:59.128: INFO: Trying to dial the pod
Jan 23 14:50:04.159: INFO: Controller my-hostname-basic-bd0e7390-78b8-4025-afd0-8381d5eab9a9: Got expected result from replica 1 [my-hostname-basic-bd0e7390-78b8-4025-afd0-8381d5eab9a9-cm876]: "my-hostname-basic-bd0e7390-78b8-4025-afd0-8381d5eab9a9-cm876", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:50:04.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2037" for this suite.
Jan 23 14:50:10.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:50:10.328: INFO: namespace replication-controller-2037 deletion completed in 6.160633802s

• [SLOW TEST:20.357 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
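"Trying to dial the pod" does not hit the pod IP directly; it goes through the apiserver's pod proxy, which client-go exposes as ProxyGet. A sketch (the scheme and port are assumptions for the serve-hostname image; names illustrative):

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// dialReplica reads the hostname a replica serves by going through the
// apiserver's pod proxy, the same path the spec's dialing step uses.
func dialReplica(ctx context.Context, clientset *kubernetes.Clientset, ns, pod string) (string, error) {
	raw, err := clientset.CoreV1().Pods(ns).
		ProxyGet("http", pod, "9376", "/", nil). // 9376: assumed serve-hostname port
		DoRaw(ctx)
	if err != nil {
		return "", err
	}
	return string(raw), nil
}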
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:50:10.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 23 14:50:10.488: INFO: Waiting up to 5m0s for pod "pod-13b18733-ae96-41ab-af2c-f99f7a580fcd" in namespace "emptydir-5342" to be "success or failure"
Jan 23 14:50:10.514: INFO: Pod "pod-13b18733-ae96-41ab-af2c-f99f7a580fcd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.962966ms
Jan 23 14:50:12.532: INFO: Pod "pod-13b18733-ae96-41ab-af2c-f99f7a580fcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043298372s
Jan 23 14:50:14.545: INFO: Pod "pod-13b18733-ae96-41ab-af2c-f99f7a580fcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056700565s
Jan 23 14:50:16.562: INFO: Pod "pod-13b18733-ae96-41ab-af2c-f99f7a580fcd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073537021s
Jan 23 14:50:18.575: INFO: Pod "pod-13b18733-ae96-41ab-af2c-f99f7a580fcd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085875869s
Jan 23 14:50:20.591: INFO: Pod "pod-13b18733-ae96-41ab-af2c-f99f7a580fcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102001646s
STEP: Saw pod success
Jan 23 14:50:20.591: INFO: Pod "pod-13b18733-ae96-41ab-af2c-f99f7a580fcd" satisfied condition "success or failure"
Jan 23 14:50:20.605: INFO: Trying to get logs from node iruya-node pod pod-13b18733-ae96-41ab-af2c-f99f7a580fcd container test-container: 
STEP: delete the pod
Jan 23 14:50:20.695: INFO: Waiting for pod pod-13b18733-ae96-41ab-af2c-f99f7a580fcd to disappear
Jan 23 14:50:20.708: INFO: Pod pod-13b18733-ae96-41ab-af2c-f99f7a580fcd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:50:20.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5342" for this suite.
Jan 23 14:50:26.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:50:26.894: INFO: namespace emptydir-5342 deletion completed in 6.177261722s

• [SLOW TEST:16.565 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
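The EmptyDir test above creates a single pod with an emptyDir volume backed by tmpfs (medium: Memory), writes a mode-0777 file as a non-root user, and waits for the pod to reach Succeeded. A minimal sketch of such a pod, assuming a plain busybox container in place of the suite's mounttest image:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in for the suite's mounttest image
				// Create a file on the tmpfs mount with mode 0777 and show
				// its permissions and the mount's filesystem type.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
				SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1001)}, // non-root
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory is what makes this emptyDir a tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```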
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:50:26.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7588
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7588
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-7588
Jan 23 14:50:27.061: INFO: Found 0 stateful pods, waiting for 1
Jan 23 14:50:37.070: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 23 14:50:37.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7588 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 14:50:39.665: INFO: stderr: "I0123 14:50:39.358357    3119 log.go:172] (0xc000116dc0) (0xc000746780) Create stream\nI0123 14:50:39.358536    3119 log.go:172] (0xc000116dc0) (0xc000746780) Stream added, broadcasting: 1\nI0123 14:50:39.369049    3119 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0123 14:50:39.369098    3119 log.go:172] (0xc000116dc0) (0xc0005d8280) Create stream\nI0123 14:50:39.369112    3119 log.go:172] (0xc000116dc0) (0xc0005d8280) Stream added, broadcasting: 3\nI0123 14:50:39.371382    3119 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0123 14:50:39.371433    3119 log.go:172] (0xc000116dc0) (0xc0006e4000) Create stream\nI0123 14:50:39.371448    3119 log.go:172] (0xc000116dc0) (0xc0006e4000) Stream added, broadcasting: 5\nI0123 14:50:39.374571    3119 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0123 14:50:39.532655    3119 log.go:172] (0xc000116dc0) Data frame received for 5\nI0123 14:50:39.532719    3119 log.go:172] (0xc0006e4000) (5) Data frame handling\nI0123 14:50:39.532739    3119 log.go:172] (0xc0006e4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0123 14:50:39.555678    3119 log.go:172] (0xc000116dc0) Data frame received for 3\nI0123 14:50:39.555699    3119 log.go:172] (0xc0005d8280) (3) Data frame handling\nI0123 14:50:39.555725    3119 log.go:172] (0xc0005d8280) (3) Data frame sent\nI0123 14:50:39.654717    3119 log.go:172] (0xc000116dc0) Data frame received for 1\nI0123 14:50:39.654834    3119 log.go:172] (0xc000116dc0) (0xc0005d8280) Stream removed, broadcasting: 3\nI0123 14:50:39.655001    3119 log.go:172] (0xc000116dc0) (0xc0006e4000) Stream removed, broadcasting: 5\nI0123 14:50:39.655114    3119 log.go:172] (0xc000746780) (1) Data frame handling\nI0123 14:50:39.655145    3119 log.go:172] (0xc000746780) (1) Data frame sent\nI0123 14:50:39.655161    3119 log.go:172] (0xc000116dc0) (0xc000746780) Stream removed, broadcasting: 1\nI0123 14:50:39.655185    3119 log.go:172] (0xc000116dc0) Go away received\nI0123 14:50:39.655903    3119 log.go:172] (0xc000116dc0) (0xc000746780) Stream removed, broadcasting: 1\nI0123 14:50:39.655927    3119 log.go:172] (0xc000116dc0) (0xc0005d8280) Stream removed, broadcasting: 3\nI0123 14:50:39.655945    3119 log.go:172] (0xc000116dc0) (0xc0006e4000) Stream removed, broadcasting: 5\n"
Jan 23 14:50:39.665: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 14:50:39.665: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 23 14:50:39.672: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 23 14:50:49.681: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 23 14:50:49.681: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 14:50:49.703: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999995306s
Jan 23 14:50:50.718: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99252679s
Jan 23 14:50:51.732: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.978140907s
Jan 23 14:50:52.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.96379537s
Jan 23 14:50:53.754: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.954457881s
Jan 23 14:50:54.790: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.942415182s
Jan 23 14:50:55.807: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.906005001s
Jan 23 14:50:56.815: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.889215821s
Jan 23 14:50:57.826: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.881306124s
Jan 23 14:50:58.838: INFO: Verifying statefulset ss doesn't scale past 1 for another 869.57618ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-7588
Jan 23 14:50:59.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7588 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 14:51:00.362: INFO: stderr: "I0123 14:51:00.103834    3150 log.go:172] (0xc000117080) (0xc0005c6aa0) Create stream\nI0123 14:51:00.103975    3150 log.go:172] (0xc000117080) (0xc0005c6aa0) Stream added, broadcasting: 1\nI0123 14:51:00.108854    3150 log.go:172] (0xc000117080) Reply frame received for 1\nI0123 14:51:00.108888    3150 log.go:172] (0xc000117080) (0xc0007fe000) Create stream\nI0123 14:51:00.108896    3150 log.go:172] (0xc000117080) (0xc0007fe000) Stream added, broadcasting: 3\nI0123 14:51:00.110856    3150 log.go:172] (0xc000117080) Reply frame received for 3\nI0123 14:51:00.110884    3150 log.go:172] (0xc000117080) (0xc000880000) Create stream\nI0123 14:51:00.110898    3150 log.go:172] (0xc000117080) (0xc000880000) Stream added, broadcasting: 5\nI0123 14:51:00.112339    3150 log.go:172] (0xc000117080) Reply frame received for 5\nI0123 14:51:00.210314    3150 log.go:172] (0xc000117080) Data frame received for 5\nI0123 14:51:00.210391    3150 log.go:172] (0xc000880000) (5) Data frame handling\nI0123 14:51:00.210415    3150 log.go:172] (0xc000880000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0123 14:51:00.210451    3150 log.go:172] (0xc000117080) Data frame received for 3\nI0123 14:51:00.210465    3150 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0123 14:51:00.210472    3150 log.go:172] (0xc0007fe000) (3) Data frame sent\nI0123 14:51:00.341570    3150 log.go:172] (0xc000117080) (0xc0007fe000) Stream removed, broadcasting: 3\nI0123 14:51:00.341930    3150 log.go:172] (0xc000117080) (0xc000880000) Stream removed, broadcasting: 5\nI0123 14:51:00.342043    3150 log.go:172] (0xc000117080) Data frame received for 1\nI0123 14:51:00.342116    3150 log.go:172] (0xc0005c6aa0) (1) Data frame handling\nI0123 14:51:00.342137    3150 log.go:172] (0xc0005c6aa0) (1) Data frame sent\nI0123 14:51:00.342158    3150 log.go:172] (0xc000117080) (0xc0005c6aa0) Stream removed, broadcasting: 1\nI0123 14:51:00.342201    3150 log.go:172] (0xc000117080) Go away received\nI0123 14:51:00.343446    3150 log.go:172] (0xc000117080) (0xc0005c6aa0) Stream removed, broadcasting: 1\nI0123 14:51:00.343475    3150 log.go:172] (0xc000117080) (0xc0007fe000) Stream removed, broadcasting: 3\nI0123 14:51:00.343492    3150 log.go:172] (0xc000117080) (0xc000880000) Stream removed, broadcasting: 5\n"
Jan 23 14:51:00.362: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 14:51:00.362: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 23 14:51:00.370: INFO: Found 1 stateful pods, waiting for 3
Jan 23 14:51:10.377: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 14:51:10.377: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 14:51:10.377: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 23 14:51:20.381: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 14:51:20.382: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 23 14:51:20.382: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 23 14:51:20.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7588 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 14:51:21.121: INFO: stderr: "I0123 14:51:20.728812    3171 log.go:172] (0xc00094a420) (0xc00032c6e0) Create stream\nI0123 14:51:20.729203    3171 log.go:172] (0xc00094a420) (0xc00032c6e0) Stream added, broadcasting: 1\nI0123 14:51:20.774121    3171 log.go:172] (0xc00094a420) Reply frame received for 1\nI0123 14:51:20.774191    3171 log.go:172] (0xc00094a420) (0xc000818000) Create stream\nI0123 14:51:20.774210    3171 log.go:172] (0xc00094a420) (0xc000818000) Stream added, broadcasting: 3\nI0123 14:51:20.776265    3171 log.go:172] (0xc00094a420) Reply frame received for 3\nI0123 14:51:20.776315    3171 log.go:172] (0xc00094a420) (0xc0006543c0) Create stream\nI0123 14:51:20.776331    3171 log.go:172] (0xc00094a420) (0xc0006543c0) Stream added, broadcasting: 5\nI0123 14:51:20.778775    3171 log.go:172] (0xc00094a420) Reply frame received for 5\nI0123 14:51:20.942908    3171 log.go:172] (0xc00094a420) Data frame received for 5\nI0123 14:51:20.943046    3171 log.go:172] (0xc0006543c0) (5) Data frame handling\nI0123 14:51:20.943086    3171 log.go:172] (0xc0006543c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0123 14:51:20.943121    3171 log.go:172] (0xc00094a420) Data frame received for 3\nI0123 14:51:20.943138    3171 log.go:172] (0xc000818000) (3) Data frame handling\nI0123 14:51:20.943158    3171 log.go:172] (0xc000818000) (3) Data frame sent\nI0123 14:51:21.106443    3171 log.go:172] (0xc00094a420) Data frame received for 1\nI0123 14:51:21.106525    3171 log.go:172] (0xc00094a420) (0xc000818000) Stream removed, broadcasting: 3\nI0123 14:51:21.106599    3171 log.go:172] (0xc00032c6e0) (1) Data frame handling\nI0123 14:51:21.106639    3171 log.go:172] (0xc00032c6e0) (1) Data frame sent\nI0123 14:51:21.106677    3171 log.go:172] (0xc00094a420) (0xc0006543c0) Stream removed, broadcasting: 5\nI0123 14:51:21.106715    3171 log.go:172] (0xc00094a420) (0xc00032c6e0) Stream removed, broadcasting: 1\nI0123 14:51:21.106754    3171 log.go:172] (0xc00094a420) Go away received\nI0123 14:51:21.107486    3171 log.go:172] (0xc00094a420) (0xc00032c6e0) Stream removed, broadcasting: 1\nI0123 14:51:21.107498    3171 log.go:172] (0xc00094a420) (0xc000818000) Stream removed, broadcasting: 3\nI0123 14:51:21.107504    3171 log.go:172] (0xc00094a420) (0xc0006543c0) Stream removed, broadcasting: 5\n"
Jan 23 14:51:21.122: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 14:51:21.122: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 23 14:51:21.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7588 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 14:51:21.609: INFO: stderr: "I0123 14:51:21.363051    3191 log.go:172] (0xc0008f00b0) (0xc0009f00a0) Create stream\nI0123 14:51:21.363300    3191 log.go:172] (0xc0008f00b0) (0xc0009f00a0) Stream added, broadcasting: 1\nI0123 14:51:21.367024    3191 log.go:172] (0xc0008f00b0) Reply frame received for 1\nI0123 14:51:21.367057    3191 log.go:172] (0xc0008f00b0) (0xc000826000) Create stream\nI0123 14:51:21.367069    3191 log.go:172] (0xc0008f00b0) (0xc000826000) Stream added, broadcasting: 3\nI0123 14:51:21.368493    3191 log.go:172] (0xc0008f00b0) Reply frame received for 3\nI0123 14:51:21.368522    3191 log.go:172] (0xc0008f00b0) (0xc0005ae140) Create stream\nI0123 14:51:21.368534    3191 log.go:172] (0xc0008f00b0) (0xc0005ae140) Stream added, broadcasting: 5\nI0123 14:51:21.369291    3191 log.go:172] (0xc0008f00b0) Reply frame received for 5\nI0123 14:51:21.469486    3191 log.go:172] (0xc0008f00b0) Data frame received for 5\nI0123 14:51:21.469523    3191 log.go:172] (0xc0005ae140) (5) Data frame handling\nI0123 14:51:21.469534    3191 log.go:172] (0xc0005ae140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0123 14:51:21.531426    3191 log.go:172] (0xc0008f00b0) Data frame received for 3\nI0123 14:51:21.531456    3191 log.go:172] (0xc000826000) (3) Data frame handling\nI0123 14:51:21.531470    3191 log.go:172] (0xc000826000) (3) Data frame sent\nI0123 14:51:21.605293    3191 log.go:172] (0xc0008f00b0) (0xc000826000) Stream removed, broadcasting: 3\nI0123 14:51:21.605404    3191 log.go:172] (0xc0008f00b0) Data frame received for 1\nI0123 14:51:21.605419    3191 log.go:172] (0xc0009f00a0) (1) Data frame handling\nI0123 14:51:21.605434    3191 log.go:172] (0xc0009f00a0) (1) Data frame sent\nI0123 14:51:21.605461    3191 log.go:172] (0xc0008f00b0) (0xc0009f00a0) Stream removed, broadcasting: 1\nI0123 14:51:21.605687    3191 log.go:172] (0xc0008f00b0) (0xc0005ae140) Stream removed, broadcasting: 5\nI0123 14:51:21.605747    3191 log.go:172] (0xc0008f00b0) Go away received\nI0123 14:51:21.605779    3191 log.go:172] (0xc0008f00b0) (0xc0009f00a0) Stream removed, broadcasting: 1\nI0123 14:51:21.605793    3191 log.go:172] (0xc0008f00b0) (0xc000826000) Stream removed, broadcasting: 3\nI0123 14:51:21.605799    3191 log.go:172] (0xc0008f00b0) (0xc0005ae140) Stream removed, broadcasting: 5\n"
Jan 23 14:51:21.609: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 14:51:21.609: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 23 14:51:21.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7588 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 23 14:51:22.347: INFO: stderr: "I0123 14:51:21.902453    3209 log.go:172] (0xc000a0e160) (0xc000a02640) Create stream\nI0123 14:51:21.902600    3209 log.go:172] (0xc000a0e160) (0xc000a02640) Stream added, broadcasting: 1\nI0123 14:51:21.911530    3209 log.go:172] (0xc000a0e160) Reply frame received for 1\nI0123 14:51:21.911651    3209 log.go:172] (0xc000a0e160) (0xc0002de140) Create stream\nI0123 14:51:21.911671    3209 log.go:172] (0xc000a0e160) (0xc0002de140) Stream added, broadcasting: 3\nI0123 14:51:21.913061    3209 log.go:172] (0xc000a0e160) Reply frame received for 3\nI0123 14:51:21.913091    3209 log.go:172] (0xc000a0e160) (0xc000a026e0) Create stream\nI0123 14:51:21.913103    3209 log.go:172] (0xc000a0e160) (0xc000a026e0) Stream added, broadcasting: 5\nI0123 14:51:21.916243    3209 log.go:172] (0xc000a0e160) Reply frame received for 5\nI0123 14:51:22.142634    3209 log.go:172] (0xc000a0e160) Data frame received for 5\nI0123 14:51:22.142795    3209 log.go:172] (0xc000a026e0) (5) Data frame handling\nI0123 14:51:22.142848    3209 log.go:172] (0xc000a026e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0123 14:51:22.144781    3209 log.go:172] (0xc000a0e160) Data frame received for 3\nI0123 14:51:22.144824    3209 log.go:172] (0xc0002de140) (3) Data frame handling\nI0123 14:51:22.144875    3209 log.go:172] (0xc0002de140) (3) Data frame sent\nI0123 14:51:22.330607    3209 log.go:172] (0xc000a0e160) (0xc0002de140) Stream removed, broadcasting: 3\nI0123 14:51:22.330803    3209 log.go:172] (0xc000a0e160) Data frame received for 1\nI0123 14:51:22.330829    3209 log.go:172] (0xc000a02640) (1) Data frame handling\nI0123 14:51:22.330862    3209 log.go:172] (0xc000a02640) (1) Data frame sent\nI0123 14:51:22.330880    3209 log.go:172] (0xc000a0e160) (0xc000a02640) Stream removed, broadcasting: 1\nI0123 14:51:22.331101    3209 log.go:172] (0xc000a0e160) (0xc000a026e0) Stream removed, broadcasting: 5\nI0123 14:51:22.331156    3209 log.go:172] (0xc000a0e160) Go away received\nI0123 14:51:22.331823    3209 log.go:172] (0xc000a0e160) (0xc000a02640) Stream removed, broadcasting: 1\nI0123 14:51:22.331834    3209 log.go:172] (0xc000a0e160) (0xc0002de140) Stream removed, broadcasting: 3\nI0123 14:51:22.331844    3209 log.go:172] (0xc000a0e160) (0xc000a026e0) Stream removed, broadcasting: 5\n"
Jan 23 14:51:22.347: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 23 14:51:22.347: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 23 14:51:22.348: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 14:51:22.363: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 23 14:51:22.363: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 23 14:51:22.363: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 23 14:51:22.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999061s
Jan 23 14:51:23.394: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988251551s
Jan 23 14:51:24.402: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979191225s
Jan 23 14:51:25.427: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97046004s
Jan 23 14:51:26.441: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.945982211s
Jan 23 14:51:27.472: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.931796397s
Jan 23 14:51:28.487: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.900577277s
Jan 23 14:51:29.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.885539427s
Jan 23 14:51:30.513: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.874110689s
Jan 23 14:51:31.525: INFO: Verifying statefulset ss doesn't scale past 3 for another 859.397766ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7588
Jan 23 14:51:32.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7588 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 14:51:33.171: INFO: stderr: "I0123 14:51:32.746473    3230 log.go:172] (0xc0006a49a0) (0xc00072c6e0) Create stream\nI0123 14:51:32.746597    3230 log.go:172] (0xc0006a49a0) (0xc00072c6e0) Stream added, broadcasting: 1\nI0123 14:51:32.768404    3230 log.go:172] (0xc0006a49a0) Reply frame received for 1\nI0123 14:51:32.768475    3230 log.go:172] (0xc0006a49a0) (0xc0005101e0) Create stream\nI0123 14:51:32.768485    3230 log.go:172] (0xc0006a49a0) (0xc0005101e0) Stream added, broadcasting: 3\nI0123 14:51:32.770523    3230 log.go:172] (0xc0006a49a0) Reply frame received for 3\nI0123 14:51:32.770662    3230 log.go:172] (0xc0006a49a0) (0xc000510280) Create stream\nI0123 14:51:32.770691    3230 log.go:172] (0xc0006a49a0) (0xc000510280) Stream added, broadcasting: 5\nI0123 14:51:32.773861    3230 log.go:172] (0xc0006a49a0) Reply frame received for 5\nI0123 14:51:33.043422    3230 log.go:172] (0xc0006a49a0) Data frame received for 5\nI0123 14:51:33.043501    3230 log.go:172] (0xc000510280) (5) Data frame handling\nI0123 14:51:33.043517    3230 log.go:172] (0xc000510280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0123 14:51:33.044778    3230 log.go:172] (0xc0006a49a0) Data frame received for 3\nI0123 14:51:33.044794    3230 log.go:172] (0xc0005101e0) (3) Data frame handling\nI0123 14:51:33.044814    3230 log.go:172] (0xc0005101e0) (3) Data frame sent\nI0123 14:51:33.166307    3230 log.go:172] (0xc0006a49a0) Data frame received for 1\nI0123 14:51:33.166723    3230 log.go:172] (0xc00072c6e0) (1) Data frame handling\nI0123 14:51:33.166774    3230 log.go:172] (0xc00072c6e0) (1) Data frame sent\nI0123 14:51:33.166809    3230 log.go:172] (0xc0006a49a0) (0xc00072c6e0) Stream removed, broadcasting: 1\nI0123 14:51:33.167301    3230 log.go:172] (0xc0006a49a0) (0xc0005101e0) Stream removed, broadcasting: 3\nI0123 14:51:33.167364    3230 log.go:172] (0xc0006a49a0) (0xc000510280) Stream removed, broadcasting: 5\nI0123 14:51:33.167443    3230 log.go:172] (0xc0006a49a0) (0xc00072c6e0) Stream removed, broadcasting: 1\nI0123 14:51:33.167507    3230 log.go:172] (0xc0006a49a0) (0xc0005101e0) Stream removed, broadcasting: 3\nI0123 14:51:33.167573    3230 log.go:172] (0xc0006a49a0) Go away received\nI0123 14:51:33.167612    3230 log.go:172] (0xc0006a49a0) (0xc000510280) Stream removed, broadcasting: 5\n"
Jan 23 14:51:33.172: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 14:51:33.172: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 23 14:51:33.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 14:51:33.509: INFO: stderr: "I0123 14:51:33.336135    3245 log.go:172] (0xc000a3e630) (0xc0005268c0) Create stream\nI0123 14:51:33.336270    3245 log.go:172] (0xc000a3e630) (0xc0005268c0) Stream added, broadcasting: 1\nI0123 14:51:33.344007    3245 log.go:172] (0xc000a3e630) Reply frame received for 1\nI0123 14:51:33.344050    3245 log.go:172] (0xc000a3e630) (0xc000526140) Create stream\nI0123 14:51:33.344064    3245 log.go:172] (0xc000a3e630) (0xc000526140) Stream added, broadcasting: 3\nI0123 14:51:33.344857    3245 log.go:172] (0xc000a3e630) Reply frame received for 3\nI0123 14:51:33.344884    3245 log.go:172] (0xc000a3e630) (0xc00035a000) Create stream\nI0123 14:51:33.344893    3245 log.go:172] (0xc000a3e630) (0xc00035a000) Stream added, broadcasting: 5\nI0123 14:51:33.345751    3245 log.go:172] (0xc000a3e630) Reply frame received for 5\nI0123 14:51:33.417274    3245 log.go:172] (0xc000a3e630) Data frame received for 3\nI0123 14:51:33.417344    3245 log.go:172] (0xc000526140) (3) Data frame handling\nI0123 14:51:33.417363    3245 log.go:172] (0xc000526140) (3) Data frame sent\nI0123 14:51:33.417392    3245 log.go:172] (0xc000a3e630) Data frame received for 5\nI0123 14:51:33.417404    3245 log.go:172] (0xc00035a000) (5) Data frame handling\nI0123 14:51:33.417421    3245 log.go:172] (0xc00035a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0123 14:51:33.501815    3245 log.go:172] (0xc000a3e630) Data frame received for 1\nI0123 14:51:33.501957    3245 log.go:172] (0xc000a3e630) (0xc00035a000) Stream removed, broadcasting: 5\nI0123 14:51:33.501990    3245 log.go:172] (0xc0005268c0) (1) Data frame handling\nI0123 14:51:33.502011    3245 log.go:172] (0xc0005268c0) (1) Data frame sent\nI0123 14:51:33.502057    3245 log.go:172] (0xc000a3e630) (0xc000526140) Stream removed, broadcasting: 3\nI0123 14:51:33.502212    3245 log.go:172] (0xc000a3e630) (0xc0005268c0) Stream removed, broadcasting: 1\nI0123 14:51:33.502247    3245 log.go:172] (0xc000a3e630) Go away received\nI0123 14:51:33.503015    3245 log.go:172] (0xc000a3e630) (0xc0005268c0) Stream removed, broadcasting: 1\nI0123 14:51:33.503077    3245 log.go:172] (0xc000a3e630) (0xc000526140) Stream removed, broadcasting: 3\nI0123 14:51:33.503095    3245 log.go:172] (0xc000a3e630) (0xc00035a000) Stream removed, broadcasting: 5\n"
Jan 23 14:51:33.509: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 14:51:33.509: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 23 14:51:33.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7588 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 23 14:51:34.227: INFO: stderr: "I0123 14:51:33.721804    3265 log.go:172] (0xc000a3e0b0) (0xc00098a5a0) Create stream\nI0123 14:51:33.722032    3265 log.go:172] (0xc000a3e0b0) (0xc00098a5a0) Stream added, broadcasting: 1\nI0123 14:51:33.727612    3265 log.go:172] (0xc000a3e0b0) Reply frame received for 1\nI0123 14:51:33.727704    3265 log.go:172] (0xc000a3e0b0) (0xc000656280) Create stream\nI0123 14:51:33.727726    3265 log.go:172] (0xc000a3e0b0) (0xc000656280) Stream added, broadcasting: 3\nI0123 14:51:33.731349    3265 log.go:172] (0xc000a3e0b0) Reply frame received for 3\nI0123 14:51:33.731385    3265 log.go:172] (0xc000a3e0b0) (0xc00098a640) Create stream\nI0123 14:51:33.731396    3265 log.go:172] (0xc000a3e0b0) (0xc00098a640) Stream added, broadcasting: 5\nI0123 14:51:33.733540    3265 log.go:172] (0xc000a3e0b0) Reply frame received for 5\nI0123 14:51:34.004211    3265 log.go:172] (0xc000a3e0b0) Data frame received for 5\nI0123 14:51:34.004368    3265 log.go:172] (0xc00098a640) (5) Data frame handling\nI0123 14:51:34.004400    3265 log.go:172] (0xc00098a640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0123 14:51:34.004440    3265 log.go:172] (0xc000a3e0b0) Data frame received for 3\nI0123 14:51:34.004470    3265 log.go:172] (0xc000656280) (3) Data frame handling\nI0123 14:51:34.004494    3265 log.go:172] (0xc000656280) (3) Data frame sent\nI0123 14:51:34.214648    3265 log.go:172] (0xc000a3e0b0) (0xc000656280) Stream removed, broadcasting: 3\nI0123 14:51:34.215042    3265 log.go:172] (0xc000a3e0b0) Data frame received for 1\nI0123 14:51:34.215105    3265 log.go:172] (0xc000a3e0b0) (0xc00098a640) Stream removed, broadcasting: 5\nI0123 14:51:34.215168    3265 log.go:172] (0xc00098a5a0) (1) Data frame handling\nI0123 14:51:34.215229    3265 log.go:172] (0xc00098a5a0) (1) Data frame sent\nI0123 14:51:34.215251    3265 log.go:172] (0xc000a3e0b0) (0xc00098a5a0) Stream removed, broadcasting: 1\nI0123 14:51:34.215277    3265 log.go:172] (0xc000a3e0b0) Go away received\nI0123 14:51:34.215958    3265 log.go:172] (0xc000a3e0b0) (0xc00098a5a0) Stream removed, broadcasting: 1\nI0123 14:51:34.215970    3265 log.go:172] (0xc000a3e0b0) (0xc000656280) Stream removed, broadcasting: 3\nI0123 14:51:34.215983    3265 log.go:172] (0xc000a3e0b0) (0xc00098a640) Stream removed, broadcasting: 5\n"
Jan 23 14:51:34.228: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 23 14:51:34.228: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 23 14:51:34.228: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 23 14:52:14.267: INFO: Deleting all statefulset in ns statefulset-7588
Jan 23 14:52:14.272: INFO: Scaling statefulset ss to 0
Jan 23 14:52:14.284: INFO: Waiting for statefulset status.replicas updated to 0
Jan 23 14:52:14.288: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:52:14.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7588" for this suite.
Jan 23 14:52:20.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:52:20.923: INFO: namespace statefulset-7588 deletion completed in 6.591152748s

• [SLOW TEST:114.029 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
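The ordering guarantees exercised above come from the OrderedReady pod management policy (the StatefulSet default): ss-1 is not created until ss-0 is Ready, ss-2 waits for ss-1, and scale-down deletes ss-2 before ss-1. The kubectl exec calls in the log make a pod unready by moving index.html away from the readiness probe's path, which is why the "doesn't scale past N" countdowns appear. A sketch of a StatefulSet wired that way (v1.15-era field names; the image and port are illustrative assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"baz": "blah", "foo": "bar"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "test", // headless service created beforehand, as in the log
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// OrderedReady is what makes scaling predictable: pods are
			// created ss-0, ss-1, ss-2 and deleted in reverse order, and
			// progress halts while any pod is unready.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx", // the suite uses its own nginx test image
						ReadinessProbe: &corev1.Probe{
							// Handler is the v1.15-era field name
							// (ProbeHandler in newer k8s.io/api versions).
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html", // mv-ing this file away fails the probe
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
```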
SSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:52:20.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-681f1be7-8b06-44a9-a700-b0b505d69fc0
STEP: Creating secret with name secret-projected-all-test-volume-5f746f44-db67-40bc-ac1b-767458227531
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 23 14:52:21.052: INFO: Waiting up to 5m0s for pod "projected-volume-911f1ba5-4f4a-4409-ac8c-44b4a9afda04" in namespace "projected-5913" to be "success or failure"
Jan 23 14:52:21.095: INFO: Pod "projected-volume-911f1ba5-4f4a-4409-ac8c-44b4a9afda04": Phase="Pending", Reason="", readiness=false. Elapsed: 42.552445ms
Jan 23 14:52:23.102: INFO: Pod "projected-volume-911f1ba5-4f4a-4409-ac8c-44b4a9afda04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049679929s
Jan 23 14:52:25.109: INFO: Pod "projected-volume-911f1ba5-4f4a-4409-ac8c-44b4a9afda04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057412317s
Jan 23 14:52:27.116: INFO: Pod "projected-volume-911f1ba5-4f4a-4409-ac8c-44b4a9afda04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064074211s
Jan 23 14:52:29.130: INFO: Pod "projected-volume-911f1ba5-4f4a-4409-ac8c-44b4a9afda04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077931236s
STEP: Saw pod success
Jan 23 14:52:29.130: INFO: Pod "projected-volume-911f1ba5-4f4a-4409-ac8c-44b4a9afda04" satisfied condition "success or failure"
Jan 23 14:52:29.135: INFO: Trying to get logs from node iruya-node pod projected-volume-911f1ba5-4f4a-4409-ac8c-44b4a9afda04 container projected-all-volume-test: 
STEP: delete the pod
Jan 23 14:52:29.496: INFO: Waiting for pod projected-volume-911f1ba5-4f4a-4409-ac8c-44b4a9afda04 to disappear
Jan 23 14:52:29.533: INFO: Pod projected-volume-911f1ba5-4f4a-4409-ac8c-44b4a9afda04 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:52:29.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5913" for this suite.
Jan 23 14:52:35.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:52:35.705: INFO: namespace projected-5913 deletion completed in 6.161838874s

• [SLOW TEST:14.782 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
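The Projected test above creates a configMap and a secret, then mounts both, together with downward API fields, through a single projected volume and verifies all three materialize as files under one mount path. A minimal sketch of that volume layout (resource names and paths are assumptions for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One projected volume merging all three sources under a single mount.
	vol := corev1.Volume{
		Name: "projected-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"}, // assumed name
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"}, // assumed name
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "projected-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-all-volume-test",
				Image:        "busybox", // stand-in; the suite's container reads the files and exits
				Command:      []string{"sh", "-c", "cat /all/podname && ls /all"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-volume", MountPath: "/all"}},
			}},
			Volumes: []corev1.Volume{vol},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```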
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:52:35.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan 23 14:52:35.789: INFO: Waiting up to 5m0s for pod "client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc" in namespace "containers-7622" to be "success or failure"
Jan 23 14:52:35.833: INFO: Pod "client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc": Phase="Pending", Reason="", readiness=false. Elapsed: 44.554835ms
Jan 23 14:52:37.856: INFO: Pod "client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066895027s
Jan 23 14:52:39.965: INFO: Pod "client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17633025s
Jan 23 14:52:41.974: INFO: Pod "client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185617246s
Jan 23 14:52:43.981: INFO: Pod "client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.192264839s
Jan 23 14:52:45.989: INFO: Pod "client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.199719105s
STEP: Saw pod success
Jan 23 14:52:45.989: INFO: Pod "client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc" satisfied condition "success or failure"
Jan 23 14:52:45.992: INFO: Trying to get logs from node iruya-node pod client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc container test-container: 
STEP: delete the pod
Jan 23 14:52:46.116: INFO: Waiting for pod client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc to disappear
Jan 23 14:52:46.139: INFO: Pod client-containers-582f9ddb-040f-4d2b-843f-7ec118de24cc no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:52:46.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7622" for this suite.
Jan 23 14:52:54.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:52:54.265: INFO: namespace containers-7622 deletion completed in 8.116736839s

• [SLOW TEST:18.559 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
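Setting only Args on a container overrides the image's default arguments (its docker CMD) while keeping the image's ENTRYPOINT; setting Command as well would replace the ENTRYPOINT too, which is what the test name refers to. A sketch of the kind of pod used above (the image and argument values are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "client-containers-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0", // assumed tag
				// Args replaces the image's CMD (docker cmd) but keeps its
				// ENTRYPOINT; the entrypoint then echoes the argv it received,
				// which the test reads back from the pod logs.
				Args: []string{"override", "arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```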
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:52:54.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 23 14:52:55.524: INFO: Pod name wrapped-volume-race-df2612b5-d289-4eb3-b6d7-f8ae4c235cc2: Found 0 pods out of 5
Jan 23 14:53:00.554: INFO: Pod name wrapped-volume-race-df2612b5-d289-4eb3-b6d7-f8ae4c235cc2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-df2612b5-d289-4eb3-b6d7-f8ae4c235cc2 in namespace emptydir-wrapper-2295, will wait for the garbage collector to delete the pods
Jan 23 14:53:26.695: INFO: Deleting ReplicationController wrapped-volume-race-df2612b5-d289-4eb3-b6d7-f8ae4c235cc2 took: 11.373432ms
Jan 23 14:53:27.095: INFO: Terminating ReplicationController wrapped-volume-race-df2612b5-d289-4eb3-b6d7-f8ae4c235cc2 pods took: 400.591145ms
STEP: Creating RC which spawns configmap-volume pods
Jan 23 14:54:16.967: INFO: Pod name wrapped-volume-race-047682fc-3c40-47ab-baad-afd18f1ab24a: Found 0 pods out of 5
Jan 23 14:54:21.982: INFO: Pod name wrapped-volume-race-047682fc-3c40-47ab-baad-afd18f1ab24a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-047682fc-3c40-47ab-baad-afd18f1ab24a in namespace emptydir-wrapper-2295, will wait for the garbage collector to delete the pods
Jan 23 14:54:56.100: INFO: Deleting ReplicationController wrapped-volume-race-047682fc-3c40-47ab-baad-afd18f1ab24a took: 16.429713ms
Jan 23 14:54:56.900: INFO: Terminating ReplicationController wrapped-volume-race-047682fc-3c40-47ab-baad-afd18f1ab24a pods took: 800.831259ms
STEP: Creating RC which spawns configmap-volume pods
Jan 23 14:55:38.966: INFO: Pod name wrapped-volume-race-4cf976e8-12e3-40f8-a0fe-83a9825fe37e: Found 0 pods out of 5
Jan 23 14:55:43.983: INFO: Pod name wrapped-volume-race-4cf976e8-12e3-40f8-a0fe-83a9825fe37e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4cf976e8-12e3-40f8-a0fe-83a9825fe37e in namespace emptydir-wrapper-2295, will wait for the garbage collector to delete the pods
Jan 23 14:56:14.129: INFO: Deleting ReplicationController wrapped-volume-race-4cf976e8-12e3-40f8-a0fe-83a9825fe37e took: 12.577056ms
Jan 23 14:56:14.530: INFO: Terminating ReplicationController wrapped-volume-race-4cf976e8-12e3-40f8-a0fe-83a9825fe37e pods took: 400.853733ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:56:58.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2295" for this suite.
Jan 23 14:57:08.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:57:08.661: INFO: namespace emptydir-wrapper-2295 deletion completed in 10.129396249s

• [SLOW TEST:254.396 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
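The wrapper-volume race test above creates 50 configmaps and then, three times over, an RC whose five pods each mount all 50 as volumes; setting up that many configmap mounts concurrently is what historically triggered the race. A sketch of how such a pod template could be assembled in a loop (the naming scheme, image, and mount paths are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Mount all 50 configmaps into one pod; the old wrapper-volume race
	// showed up when many such mounts were set up concurrently.
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // assumed naming scheme
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{
			Name:      name,
			MountPath: fmt.Sprintf("/etc/config-%d", i),
		})
	}
	tmpl := corev1.PodTemplateSpec{
		ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "wrapped-volume-race"}},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // stand-in; the pods only need to reach Running
				Command:      []string{"sleep", "3600"},
				VolumeMounts: mounts,
			}},
			Volumes: volumes,
		},
	}
	// This template would be wrapped in a ReplicationController with replicas: 5,
	// matching the "Found 5 pods out of 5" lines in the log.
	out, _ := json.MarshalIndent(tmpl, "", "  ")
	fmt.Println(string(out))
}
```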
SS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:57:08.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 14:57:08.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:57:18.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9723" for this suite.
Jan 23 14:58:00.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:58:01.103: INFO: namespace pods-9723 deletion completed in 42.19843238s

• [SLOW TEST:52.442 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
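The Pods test above retrieves container logs over a websocket connection to the API server's pod log endpoint. A plain-HTTP analogue with a v1.15-era client-go, hitting the same GET /api/v1/namespaces/{ns}/pods/{pod}/log path (kubeconfig path, namespace, and pod name below are placeholders, not values from this log):

```go
package main

import (
	"fmt"
	"io/ioutil"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Plain-HTTP analogue of what the test does over a websocket: both end up
	// at the API server's .../pods/{pod}/log endpoint.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	req := clientset.CoreV1().Pods("default").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{})
	stream, err := req.Stream() // v1.15-era signature; newer client-go takes a context
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	logs, err := ioutil.ReadAll(stream)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", logs)
}
```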
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:58:01.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-01453a26-0c2e-4e3a-bcae-6c5933049d79
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:58:11.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3285" for this suite.
Jan 23 14:58:33.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:58:33.571: INFO: namespace configmap-3285 deletion completed in 22.137517266s

• [SLOW TEST:32.467 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
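The ConfigMap binary-data test above relies on the ConfigMap API having two payload fields: Data for UTF-8 text and BinaryData for arbitrary bytes. A volume mount materializes both as files, and the test checks that the binary file round-trips byte-for-byte. A minimal sketch (names and values are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data: map[string]string{
			"data-1": "value-1", // text payload
		},
		BinaryData: map[string][]byte{
			"dump.bin": {0xff, 0xfe, 0xfd, 0x42}, // deliberately not valid UTF-8
		},
	}
	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out)) // BinaryData values are base64-encoded on the wire
}
```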
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:58:33.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-4d7b92b3-ee51-488e-abc0-5cd2d7ccd7b4
STEP: Creating a pod to test consume configMaps
Jan 23 14:58:33.697: INFO: Waiting up to 5m0s for pod "pod-configmaps-70848d73-7aa3-45c1-b7dd-3b35ae481c28" in namespace "configmap-9182" to be "success or failure"
Jan 23 14:58:33.723: INFO: Pod "pod-configmaps-70848d73-7aa3-45c1-b7dd-3b35ae481c28": Phase="Pending", Reason="", readiness=false. Elapsed: 25.051666ms
Jan 23 14:58:35.735: INFO: Pod "pod-configmaps-70848d73-7aa3-45c1-b7dd-3b35ae481c28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037633508s
Jan 23 14:58:37.742: INFO: Pod "pod-configmaps-70848d73-7aa3-45c1-b7dd-3b35ae481c28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044397385s
Jan 23 14:58:39.752: INFO: Pod "pod-configmaps-70848d73-7aa3-45c1-b7dd-3b35ae481c28": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054134732s
Jan 23 14:58:41.760: INFO: Pod "pod-configmaps-70848d73-7aa3-45c1-b7dd-3b35ae481c28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062774719s
STEP: Saw pod success
Jan 23 14:58:41.760: INFO: Pod "pod-configmaps-70848d73-7aa3-45c1-b7dd-3b35ae481c28" satisfied condition "success or failure"
Jan 23 14:58:41.768: INFO: Trying to get logs from node iruya-node pod pod-configmaps-70848d73-7aa3-45c1-b7dd-3b35ae481c28 container configmap-volume-test: 
STEP: delete the pod
Jan 23 14:58:41.920: INFO: Waiting for pod pod-configmaps-70848d73-7aa3-45c1-b7dd-3b35ae481c28 to disappear
Jan 23 14:58:41.945: INFO: Pod pod-configmaps-70848d73-7aa3-45c1-b7dd-3b35ae481c28 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:58:41.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9182" for this suite.
Jan 23 14:58:47.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:58:48.145: INFO: namespace configmap-9182 deletion completed in 6.189046118s

• [SLOW TEST:14.574 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
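The plain volume-consumption test above mounts a configmap into a pod and reads one key back as a file from the pod's logs. A minimal sketch of that pod, assuming a busybox container in place of the suite's mounttest image and illustrative names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// Each key of the named configmap becomes a file
						// under the mount path.
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"}, // assumed name
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```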
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:58:48.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:58:56.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7942" for this suite.
Jan 23 14:59:02.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:59:02.486: INFO: namespace kubelet-test-7942 deletion completed in 6.15642785s

• [SLOW TEST:14.340 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
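The Kubelet test above runs a command that always fails and then asserts that the kubelet records a non-empty termination reason (typically "Error") under status.containerStatuses[].state.terminated. A sketch of the kind of always-failing pod involved (names and restart policy are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// /bin/false exits non-zero every time; after the kubelet runs it, the
	// pod status should carry a terminated state with a Reason set, which is
	// what the assertion above inspects.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "bin-false-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```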
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:59:02.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1ff76afd-cba3-4583-b906-0089403a86c1
STEP: Creating a pod to test consume configMaps
Jan 23 14:59:02.715: INFO: Waiting up to 5m0s for pod "pod-configmaps-36199360-b44a-4487-9a4f-54bb2af0c5a8" in namespace "configmap-4313" to be "success or failure"
Jan 23 14:59:02.749: INFO: Pod "pod-configmaps-36199360-b44a-4487-9a4f-54bb2af0c5a8": Phase="Pending", Reason="", readiness=false. Elapsed: 34.412312ms
Jan 23 14:59:04.755: INFO: Pod "pod-configmaps-36199360-b44a-4487-9a4f-54bb2af0c5a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040631795s
Jan 23 14:59:06.773: INFO: Pod "pod-configmaps-36199360-b44a-4487-9a4f-54bb2af0c5a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058336891s
Jan 23 14:59:08.785: INFO: Pod "pod-configmaps-36199360-b44a-4487-9a4f-54bb2af0c5a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069846899s
Jan 23 14:59:10.796: INFO: Pod "pod-configmaps-36199360-b44a-4487-9a4f-54bb2af0c5a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081350933s
STEP: Saw pod success
Jan 23 14:59:10.796: INFO: Pod "pod-configmaps-36199360-b44a-4487-9a4f-54bb2af0c5a8" satisfied condition "success or failure"
Jan 23 14:59:10.800: INFO: Trying to get logs from node iruya-node pod pod-configmaps-36199360-b44a-4487-9a4f-54bb2af0c5a8 container configmap-volume-test: 
STEP: delete the pod
Jan 23 14:59:10.895: INFO: Waiting for pod pod-configmaps-36199360-b44a-4487-9a4f-54bb2af0c5a8 to disappear
Jan 23 14:59:10.916: INFO: Pod pod-configmaps-36199360-b44a-4487-9a4f-54bb2af0c5a8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 14:59:10.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4313" for this suite.
Jan 23 14:59:16.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 14:59:17.121: INFO: namespace configmap-4313 deletion completed in 6.195816624s

• [SLOW TEST:14.635 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
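The "mappings and Item mode" variant above differs from plain consumption in that Items remaps a configmap key to a different file path and Mode sets that one file's permissions independently of the volume's DefaultMode. A minimal sketch of such a volume source (the configmap name, key, path, and mode are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"}, // assumed name
				Items: []corev1.KeyToPath{{
					Key:  "data-1",         // configmap key to project
					Path: "path/to/data-2", // remapped file path under the mount
					Mode: int32Ptr(0400),   // per-item file mode, overriding DefaultMode
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```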
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 14:59:17.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 23 14:59:17.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8956'
Jan 23 14:59:17.460: INFO: stderr: ""
Jan 23 14:59:17.460: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 23 14:59:17.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8956'
Jan 23 14:59:17.661: INFO: stderr: ""
Jan 23 14:59:17.661: INFO: stdout: "update-demo-nautilus-7sqkt update-demo-nautilus-9hrjf "
Jan 23 14:59:17.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:17.743: INFO: stderr: ""
Jan 23 14:59:17.743: INFO: stdout: ""
Jan 23 14:59:17.743: INFO: update-demo-nautilus-7sqkt is created but not running
Jan 23 14:59:22.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8956'
Jan 23 14:59:23.910: INFO: stderr: ""
Jan 23 14:59:23.910: INFO: stdout: "update-demo-nautilus-7sqkt update-demo-nautilus-9hrjf "
Jan 23 14:59:23.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:24.655: INFO: stderr: ""
Jan 23 14:59:24.655: INFO: stdout: ""
Jan 23 14:59:24.655: INFO: update-demo-nautilus-7sqkt is created but not running
Jan 23 14:59:29.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8956'
Jan 23 14:59:29.804: INFO: stderr: ""
Jan 23 14:59:29.805: INFO: stdout: "update-demo-nautilus-7sqkt update-demo-nautilus-9hrjf "
Jan 23 14:59:29.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:29.971: INFO: stderr: ""
Jan 23 14:59:29.971: INFO: stdout: "true"
Jan 23 14:59:29.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:30.078: INFO: stderr: ""
Jan 23 14:59:30.078: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 14:59:30.078: INFO: validating pod update-demo-nautilus-7sqkt
Jan 23 14:59:30.095: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 14:59:30.095: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 23 14:59:30.095: INFO: update-demo-nautilus-7sqkt is verified up and running
Jan 23 14:59:30.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9hrjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:30.163: INFO: stderr: ""
Jan 23 14:59:30.163: INFO: stdout: "true"
Jan 23 14:59:30.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9hrjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:30.242: INFO: stderr: ""
Jan 23 14:59:30.242: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 14:59:30.242: INFO: validating pod update-demo-nautilus-9hrjf
Jan 23 14:59:30.271: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 14:59:30.272: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 23 14:59:30.272: INFO: update-demo-nautilus-9hrjf is verified up and running
STEP: scaling down the replication controller
Jan 23 14:59:30.273: INFO: scanned /root for discovery docs: 
Jan 23 14:59:30.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8956'
Jan 23 14:59:31.482: INFO: stderr: ""
Jan 23 14:59:31.482: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 23 14:59:31.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8956'
Jan 23 14:59:31.607: INFO: stderr: ""
Jan 23 14:59:31.608: INFO: stdout: "update-demo-nautilus-7sqkt update-demo-nautilus-9hrjf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 23 14:59:36.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8956'
Jan 23 14:59:36.741: INFO: stderr: ""
Jan 23 14:59:36.741: INFO: stdout: "update-demo-nautilus-7sqkt update-demo-nautilus-9hrjf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 23 14:59:41.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8956'
Jan 23 14:59:41.903: INFO: stderr: ""
Jan 23 14:59:41.904: INFO: stdout: "update-demo-nautilus-7sqkt update-demo-nautilus-9hrjf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 23 14:59:46.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8956'
Jan 23 14:59:47.034: INFO: stderr: ""
Jan 23 14:59:47.039: INFO: stdout: "update-demo-nautilus-7sqkt "
Jan 23 14:59:47.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:47.136: INFO: stderr: ""
Jan 23 14:59:47.136: INFO: stdout: "true"
Jan 23 14:59:47.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:47.287: INFO: stderr: ""
Jan 23 14:59:47.287: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 14:59:47.287: INFO: validating pod update-demo-nautilus-7sqkt
Jan 23 14:59:47.294: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 14:59:47.294: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 23 14:59:47.295: INFO: update-demo-nautilus-7sqkt is verified up and running
STEP: scaling up the replication controller
Jan 23 14:59:47.297: INFO: scanned /root for discovery docs: 
Jan 23 14:59:47.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8956'
Jan 23 14:59:48.527: INFO: stderr: ""
Jan 23 14:59:48.528: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 23 14:59:48.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8956'
Jan 23 14:59:48.641: INFO: stderr: ""
Jan 23 14:59:48.641: INFO: stdout: "update-demo-nautilus-7sqkt update-demo-nautilus-wfwjf "
Jan 23 14:59:48.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:48.752: INFO: stderr: ""
Jan 23 14:59:48.752: INFO: stdout: "true"
Jan 23 14:59:48.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:48.859: INFO: stderr: ""
Jan 23 14:59:48.859: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 14:59:48.859: INFO: validating pod update-demo-nautilus-7sqkt
Jan 23 14:59:48.872: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 14:59:48.872: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 23 14:59:48.872: INFO: update-demo-nautilus-7sqkt is verified up and running
Jan 23 14:59:48.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wfwjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:49.019: INFO: stderr: ""
Jan 23 14:59:49.019: INFO: stdout: ""
Jan 23 14:59:49.019: INFO: update-demo-nautilus-wfwjf is created but not running
Jan 23 14:59:54.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8956'
Jan 23 14:59:54.176: INFO: stderr: ""
Jan 23 14:59:54.177: INFO: stdout: "update-demo-nautilus-7sqkt update-demo-nautilus-wfwjf "
Jan 23 14:59:54.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:54.338: INFO: stderr: ""
Jan 23 14:59:54.338: INFO: stdout: "true"
Jan 23 14:59:54.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:54.426: INFO: stderr: ""
Jan 23 14:59:54.426: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 14:59:54.426: INFO: validating pod update-demo-nautilus-7sqkt
Jan 23 14:59:54.430: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 14:59:54.430: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 23 14:59:54.431: INFO: update-demo-nautilus-7sqkt is verified up and running
Jan 23 14:59:54.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wfwjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:54.511: INFO: stderr: ""
Jan 23 14:59:54.511: INFO: stdout: ""
Jan 23 14:59:54.511: INFO: update-demo-nautilus-wfwjf is created but not running
Jan 23 14:59:59.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8956'
Jan 23 14:59:59.686: INFO: stderr: ""
Jan 23 14:59:59.686: INFO: stdout: "update-demo-nautilus-7sqkt update-demo-nautilus-wfwjf "
Jan 23 14:59:59.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:59.765: INFO: stderr: ""
Jan 23 14:59:59.765: INFO: stdout: "true"
Jan 23 14:59:59.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:59.915: INFO: stderr: ""
Jan 23 14:59:59.915: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 14:59:59.915: INFO: validating pod update-demo-nautilus-7sqkt
Jan 23 14:59:59.924: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 14:59:59.924: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 23 14:59:59.924: INFO: update-demo-nautilus-7sqkt is verified up and running
Jan 23 14:59:59.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wfwjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 14:59:59.995: INFO: stderr: ""
Jan 23 14:59:59.995: INFO: stdout: "true"
Jan 23 14:59:59.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wfwjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8956'
Jan 23 15:00:00.063: INFO: stderr: ""
Jan 23 15:00:00.063: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 23 15:00:00.063: INFO: validating pod update-demo-nautilus-wfwjf
Jan 23 15:00:00.079: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 23 15:00:00.079: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 23 15:00:00.079: INFO: update-demo-nautilus-wfwjf is verified up and running
STEP: using delete to clean up resources
Jan 23 15:00:00.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8956'
Jan 23 15:00:00.178: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 23 15:00:00.178: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 23 15:00:00.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8956'
Jan 23 15:00:00.323: INFO: stderr: "No resources found.\n"
Jan 23 15:00:00.323: INFO: stdout: ""
Jan 23 15:00:00.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8956 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 23 15:00:00.538: INFO: stderr: ""
Jan 23 15:00:00.539: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:00:00.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8956" for this suite.
Jan 23 15:00:24.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:00:24.692: INFO: namespace kubectl-8956 deletion completed in 24.143473866s

• [SLOW TEST:67.571 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
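The Update Demo test pipes a ReplicationController manifest into kubectl create -f -, waits for the pods, then drives kubectl scale down to 1 and back up to 2, re-validating each pod's image and served data after every step. A manifest of the same shape, with the scale commands from the log echoed as comments, could look like this; the image matches the one validated above, everything else is illustrative:

# kubectl create -f rc.yaml --namespace=<namespace>
# kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=<namespace>
# kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=<namespace>
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo            # the label the test's template queries match on
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80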
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:00:24.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 23 15:00:24.765: INFO: Waiting up to 5m0s for pod "pod-61140937-b9ef-446e-bcef-a33448eee691" in namespace "emptydir-4599" to be "success or failure"
Jan 23 15:00:24.855: INFO: Pod "pod-61140937-b9ef-446e-bcef-a33448eee691": Phase="Pending", Reason="", readiness=false. Elapsed: 90.224923ms
Jan 23 15:00:26.892: INFO: Pod "pod-61140937-b9ef-446e-bcef-a33448eee691": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126847559s
Jan 23 15:00:28.911: INFO: Pod "pod-61140937-b9ef-446e-bcef-a33448eee691": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145879521s
Jan 23 15:00:30.920: INFO: Pod "pod-61140937-b9ef-446e-bcef-a33448eee691": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154560473s
Jan 23 15:00:32.928: INFO: Pod "pod-61140937-b9ef-446e-bcef-a33448eee691": Phase="Running", Reason="", readiness=true. Elapsed: 8.162605067s
Jan 23 15:00:34.935: INFO: Pod "pod-61140937-b9ef-446e-bcef-a33448eee691": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169691915s
STEP: Saw pod success
Jan 23 15:00:34.935: INFO: Pod "pod-61140937-b9ef-446e-bcef-a33448eee691" satisfied condition "success or failure"
Jan 23 15:00:34.938: INFO: Trying to get logs from node iruya-node pod pod-61140937-b9ef-446e-bcef-a33448eee691 container test-container: 
STEP: delete the pod
Jan 23 15:00:35.005: INFO: Waiting for pod pod-61140937-b9ef-446e-bcef-a33448eee691 to disappear
Jan 23 15:00:35.051: INFO: Pod pod-61140937-b9ef-446e-bcef-a33448eee691 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:00:35.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4599" for this suite.
Jan 23 15:00:41.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:00:41.287: INFO: namespace emptydir-4599 deletion completed in 6.184673326s

• [SLOW TEST:16.594 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
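The emptydir spec above runs a pod as a non-root user that writes into a tmpfs-backed emptyDir and checks 0666 permissions; a later spec in this run, (root,0666,tmpfs), differs only in dropping the non-root securityContext. A hedged sketch of the pod shape, with busybox standing in for the e2e mounttest image actually used:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs      # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # the non-root variant; omit for the root variant
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo ok > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs backing, as in the test name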
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:00:41.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 23 15:00:41.495: INFO: Waiting up to 5m0s for pod "downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e" in namespace "downward-api-9734" to be "success or failure"
Jan 23 15:00:41.553: INFO: Pod "downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e": Phase="Pending", Reason="", readiness=false. Elapsed: 58.297903ms
Jan 23 15:00:43.564: INFO: Pod "downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069024799s
Jan 23 15:00:45.574: INFO: Pod "downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079400447s
Jan 23 15:00:47.593: INFO: Pod "downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098100012s
Jan 23 15:00:49.604: INFO: Pod "downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109283912s
Jan 23 15:00:51.612: INFO: Pod "downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117330959s
STEP: Saw pod success
Jan 23 15:00:51.612: INFO: Pod "downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e" satisfied condition "success or failure"
Jan 23 15:00:51.618: INFO: Trying to get logs from node iruya-node pod downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e container dapi-container: 
STEP: delete the pod
Jan 23 15:00:51.719: INFO: Waiting for pod downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e to disappear
Jan 23 15:00:51.746: INFO: Pod downward-api-4b4070dc-f45b-442f-8a4c-5338304cc46e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:00:51.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9734" for this suite.
Jan 23 15:00:57.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:00:57.913: INFO: namespace downward-api-9734 deletion completed in 6.159616147s

• [SLOW TEST:16.625 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
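The downward-API test injects pod metadata into the container's environment via fieldRef. A minimal pod exposing the three fields the test name lists (names and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E 'POD_NAME|POD_NAMESPACE|POD_IP'"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP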
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:00:57.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan 23 15:00:58.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3106 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 23 15:01:08.512: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0123 15:01:06.951966    3926 log.go:172] (0xc00061eb00) (0xc000474aa0) Create stream\nI0123 15:01:06.952009    3926 log.go:172] (0xc00061eb00) (0xc000474aa0) Stream added, broadcasting: 1\nI0123 15:01:06.958327    3926 log.go:172] (0xc00061eb00) Reply frame received for 1\nI0123 15:01:06.958365    3926 log.go:172] (0xc00061eb00) (0xc00088e000) Create stream\nI0123 15:01:06.958373    3926 log.go:172] (0xc00061eb00) (0xc00088e000) Stream added, broadcasting: 3\nI0123 15:01:06.960178    3926 log.go:172] (0xc00061eb00) Reply frame received for 3\nI0123 15:01:06.960201    3926 log.go:172] (0xc00061eb00) (0xc00088e0a0) Create stream\nI0123 15:01:06.960207    3926 log.go:172] (0xc00061eb00) (0xc00088e0a0) Stream added, broadcasting: 5\nI0123 15:01:06.961595    3926 log.go:172] (0xc00061eb00) Reply frame received for 5\nI0123 15:01:06.961612    3926 log.go:172] (0xc00061eb00) (0xc000474b40) Create stream\nI0123 15:01:06.961616    3926 log.go:172] (0xc00061eb00) (0xc000474b40) Stream added, broadcasting: 7\nI0123 15:01:06.962836    3926 log.go:172] (0xc00061eb00) Reply frame received for 7\nI0123 15:01:06.962985    3926 log.go:172] (0xc00088e000) (3) Writing data frame\nI0123 15:01:06.963106    3926 log.go:172] (0xc00088e000) (3) Writing data frame\nI0123 15:01:06.971350    3926 log.go:172] (0xc00061eb00) Data frame received for 5\nI0123 15:01:06.971443    3926 log.go:172] (0xc00088e0a0) (5) Data frame handling\nI0123 15:01:06.971486    3926 log.go:172] (0xc00088e0a0) (5) Data frame sent\nI0123 15:01:06.976147    3926 log.go:172] (0xc00061eb00) Data frame received for 5\nI0123 15:01:06.976158    3926 log.go:172] (0xc00088e0a0) (5) Data frame handling\nI0123 15:01:06.976169    3926 log.go:172] (0xc00088e0a0) (5) Data frame sent\nI0123 15:01:08.477764    3926 log.go:172] (0xc00061eb00) (0xc00088e000) Stream removed, broadcasting: 3\nI0123 15:01:08.477857    3926 log.go:172] (0xc00061eb00) Data frame received for 1\nI0123 15:01:08.477871    3926 log.go:172] (0xc000474aa0) (1) Data frame handling\nI0123 15:01:08.477876    3926 log.go:172] (0xc000474aa0) (1) Data frame sent\nI0123 15:01:08.477882    3926 log.go:172] (0xc00061eb00) (0xc000474aa0) Stream removed, broadcasting: 1\nI0123 15:01:08.477980    3926 log.go:172] (0xc00061eb00) (0xc000474b40) Stream removed, broadcasting: 7\nI0123 15:01:08.478104    3926 log.go:172] (0xc00061eb00) (0xc00088e0a0) Stream removed, broadcasting: 5\nI0123 15:01:08.478126    3926 log.go:172] (0xc00061eb00) Go away received\nI0123 15:01:08.478472    3926 log.go:172] (0xc00061eb00) (0xc000474aa0) Stream removed, broadcasting: 1\nI0123 15:01:08.478527    3926 log.go:172] (0xc00061eb00) (0xc00088e000) Stream removed, broadcasting: 3\nI0123 15:01:08.478535    3926 log.go:172] (0xc00061eb00) (0xc00088e0a0) Stream removed, broadcasting: 5\nI0123 15:01:08.478541    3926 log.go:172] (0xc00061eb00) (0xc000474b40) Stream removed, broadcasting: 7\n"
Jan 23 15:01:08.512: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:01:10.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3106" for this suite.
Jan 23 15:01:16.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:01:16.690: INFO: namespace kubectl-3106 deletion completed in 6.150288187s

• [SLOW TEST:18.777 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
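The stderr above already flags --generator=job/v1 as deprecated in favor of kubectl create. A Job manifest equivalent to the command under test, minus the --stdin/--attach plumbing that only the imperative form provides, might be the following sketch; the wait and delete comments stand in for what --rm does automatically:

# kubectl apply -f job.yaml
# kubectl wait --for=condition=complete job/e2e-test-rm-busybox-job
# kubectl delete job e2e-test-rm-busybox-job   # the --rm cleanup, done by hand
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo 'stdin closed'"]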
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:01:16.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 23 15:01:16.802: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:01:30.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1522" for this suite.
Jan 23 15:01:36.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:01:36.666: INFO: namespace init-container-1522 deletion completed in 6.220873769s

• [SLOW TEST:19.976 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
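On a restartPolicy: Never pod, init containers run to completion one at a time before any regular container starts; if every init container exits 0 the pod proceeds, and a failing one fails the pod outright since nothing is restarted. A hedged sketch of such a spec (names and commands are illustrative, not the test's fixture):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo                # hypothetical name
spec:
  restartPolicy: Never
  initContainers:                # each must exit 0 before the next starts
  - name: init-1
    image: busybox:1.29
    command: ["/bin/true"]
  - name: init-2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run-1
    image: busybox:1.29
    command: ["sh", "-c", "echo main container started"]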
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:01:36.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 23 15:01:36.743: INFO: Waiting up to 5m0s for pod "pod-590eac13-c7b3-4bee-ad6b-5b20a357dd0c" in namespace "emptydir-1420" to be "success or failure"
Jan 23 15:01:36.808: INFO: Pod "pod-590eac13-c7b3-4bee-ad6b-5b20a357dd0c": Phase="Pending", Reason="", readiness=false. Elapsed: 64.842072ms
Jan 23 15:01:38.837: INFO: Pod "pod-590eac13-c7b3-4bee-ad6b-5b20a357dd0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093937225s
Jan 23 15:01:40.854: INFO: Pod "pod-590eac13-c7b3-4bee-ad6b-5b20a357dd0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111544204s
Jan 23 15:01:42.868: INFO: Pod "pod-590eac13-c7b3-4bee-ad6b-5b20a357dd0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125081154s
Jan 23 15:01:44.878: INFO: Pod "pod-590eac13-c7b3-4bee-ad6b-5b20a357dd0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.135398192s
STEP: Saw pod success
Jan 23 15:01:44.879: INFO: Pod "pod-590eac13-c7b3-4bee-ad6b-5b20a357dd0c" satisfied condition "success or failure"
Jan 23 15:01:44.891: INFO: Trying to get logs from node iruya-node pod pod-590eac13-c7b3-4bee-ad6b-5b20a357dd0c container test-container: 
STEP: delete the pod
Jan 23 15:01:44.973: INFO: Waiting for pod pod-590eac13-c7b3-4bee-ad6b-5b20a357dd0c to disappear
Jan 23 15:01:44.982: INFO: Pod pod-590eac13-c7b3-4bee-ad6b-5b20a357dd0c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:01:44.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1420" for this suite.
Jan 23 15:01:51.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:01:51.241: INFO: namespace emptydir-1420 deletion completed in 6.251818056s

• [SLOW TEST:14.575 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:01:51.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the busybox-main-container
Jan 23 15:02:01.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-7f4c95c8-4ab7-4f56-98bd-8582b64ef684 -c busybox-main-container --namespace=emptydir-8302 -- cat /usr/share/volumeshare/shareddata.txt'
Jan 23 15:02:02.073: INFO: stderr: "I0123 15:02:01.707000    3949 log.go:172] (0xc000a78630) (0xc000510be0) Create stream\nI0123 15:02:01.707202    3949 log.go:172] (0xc000a78630) (0xc000510be0) Stream added, broadcasting: 1\nI0123 15:02:01.722490    3949 log.go:172] (0xc000a78630) Reply frame received for 1\nI0123 15:02:01.722543    3949 log.go:172] (0xc000a78630) (0xc000510000) Create stream\nI0123 15:02:01.722578    3949 log.go:172] (0xc000a78630) (0xc000510000) Stream added, broadcasting: 3\nI0123 15:02:01.724779    3949 log.go:172] (0xc000a78630) Reply frame received for 3\nI0123 15:02:01.724819    3949 log.go:172] (0xc000a78630) (0xc00029e140) Create stream\nI0123 15:02:01.724834    3949 log.go:172] (0xc000a78630) (0xc00029e140) Stream added, broadcasting: 5\nI0123 15:02:01.726453    3949 log.go:172] (0xc000a78630) Reply frame received for 5\nI0123 15:02:01.888425    3949 log.go:172] (0xc000a78630) Data frame received for 3\nI0123 15:02:01.888516    3949 log.go:172] (0xc000510000) (3) Data frame handling\nI0123 15:02:01.888552    3949 log.go:172] (0xc000510000) (3) Data frame sent\nI0123 15:02:02.062504    3949 log.go:172] (0xc000a78630) Data frame received for 1\nI0123 15:02:02.063002    3949 log.go:172] (0xc000a78630) (0xc000510000) Stream removed, broadcasting: 3\nI0123 15:02:02.063174    3949 log.go:172] (0xc000a78630) (0xc00029e140) Stream removed, broadcasting: 5\nI0123 15:02:02.063291    3949 log.go:172] (0xc000510be0) (1) Data frame handling\nI0123 15:02:02.063351    3949 log.go:172] (0xc000510be0) (1) Data frame sent\nI0123 15:02:02.063372    3949 log.go:172] (0xc000a78630) (0xc000510be0) Stream removed, broadcasting: 1\nI0123 15:02:02.063407    3949 log.go:172] (0xc000a78630) Go away received\nI0123 15:02:02.064150    3949 log.go:172] (0xc000a78630) (0xc000510be0) Stream removed, broadcasting: 1\nI0123 15:02:02.064236    3949 log.go:172] (0xc000a78630) (0xc000510000) Stream removed, broadcasting: 3\nI0123 15:02:02.064261    3949 log.go:172] (0xc000a78630) (0xc00029e140) Stream removed, broadcasting: 5\n"
Jan 23 15:02:02.074: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:02:02.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8302" for this suite.
Jan 23 15:02:08.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:02:08.256: INFO: namespace emptydir-8302 deletion completed in 6.17038537s

• [SLOW TEST:17.014 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
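The shared-volume test reads "Hello from the busy-box sub-container" out of the main container even though the sub-container wrote it, because both containers mount the same emptyDir. A sketch of that pod shape; the container names and path mirror the log, while the write command and sleep are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo    # hypothetical name
spec:
  containers:
  - name: busybox-main-container
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox:1.29
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}                 # shared scratch space, deleted with the pod
# kubectl exec pod-sharedvolume-demo -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt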
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:02:08.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-7b10067e-2266-447b-8588-61663e3059a2
STEP: Creating a pod to test consume secrets
Jan 23 15:02:08.448: INFO: Waiting up to 5m0s for pod "pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f" in namespace "secrets-21" to be "success or failure"
Jan 23 15:02:08.451: INFO: Pod "pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.253251ms
Jan 23 15:02:10.475: INFO: Pod "pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026504771s
Jan 23 15:02:12.494: INFO: Pod "pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0460534s
Jan 23 15:02:14.513: INFO: Pod "pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064789287s
Jan 23 15:02:16.532: INFO: Pod "pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0840579s
Jan 23 15:02:18.544: INFO: Pod "pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095520231s
STEP: Saw pod success
Jan 23 15:02:18.544: INFO: Pod "pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f" satisfied condition "success or failure"
Jan 23 15:02:18.548: INFO: Trying to get logs from node iruya-node pod pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f container secret-volume-test: 
STEP: delete the pod
Jan 23 15:02:18.619: INFO: Waiting for pod pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f to disappear
Jan 23 15:02:18.635: INFO: Pod pod-secrets-4a5aa04e-aed3-483c-a9e9-00a13b4ad99f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:02:18.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-21" for this suite.
Jan 23 15:02:24.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:02:24.771: INFO: namespace secrets-21 deletion completed in 6.128867192s

• [SLOW TEST:16.513 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
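This is the same items-plus-mode pattern as the ConfigMap sketch earlier, applied to a Secret volume. Hedged sketch; names, key, and path are illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map          # hypothetical name
type: Opaque
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume/new-path-data-1 && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1    # the mapping part of the test name
        mode: 0400               # the per-item mode under test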
S
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:02:24.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 23 15:02:24.865: INFO: Waiting up to 5m0s for pod "downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e" in namespace "downward-api-5715" to be "success or failure"
Jan 23 15:02:24.870: INFO: Pod "downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.057057ms
Jan 23 15:02:26.880: INFO: Pod "downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014721898s
Jan 23 15:02:28.891: INFO: Pod "downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026128601s
Jan 23 15:02:30.906: INFO: Pod "downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040730696s
Jan 23 15:02:32.922: INFO: Pod "downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056725925s
Jan 23 15:02:34.930: INFO: Pod "downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065033284s
STEP: Saw pod success
Jan 23 15:02:34.930: INFO: Pod "downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e" satisfied condition "success or failure"
Jan 23 15:02:34.935: INFO: Trying to get logs from node iruya-node pod downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e container dapi-container: 
STEP: delete the pod
Jan 23 15:02:35.001: INFO: Waiting for pod downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e to disappear
Jan 23 15:02:35.013: INFO: Pod downward-api-3ad05c68-dc1f-4195-b94e-af22ff5bba7e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:02:35.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5715" for this suite.
Jan 23 15:02:41.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:02:41.163: INFO: namespace downward-api-5715 deletion completed in 6.145197618s

• [SLOW TEST:16.393 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
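Relative to the earlier downward-API sketch, only the fieldRef changes here: status.hostIP exposes the IP of the node the pod landed on. Minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP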
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:02:41.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 15:02:41.228: INFO: Creating deployment "nginx-deployment"
Jan 23 15:02:41.233: INFO: Waiting for observed generation 1
Jan 23 15:02:44.402: INFO: Waiting for all required pods to come up
Jan 23 15:02:44.873: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 23 15:03:11.384: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 23 15:03:11.399: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 23 15:03:11.410: INFO: Updating deployment nginx-deployment
Jan 23 15:03:11.410: INFO: Waiting for observed generation 2
Jan 23 15:03:14.137: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 23 15:03:15.462: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 23 15:03:15.471: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 23 15:03:16.791: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 23 15:03:16.791: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 23 15:03:16.796: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 23 15:03:17.335: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 23 15:03:17.335: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 23 15:03:17.363: INFO: Updating deployment nginx-deployment
Jan 23 15:03:17.363: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 23 15:03:17.846: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 23 15:03:18.052: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
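The numbers just verified are proportional scaling at work. Before the scale-up the deployment wants 10 replicas but is stuck mid-rollout on the unpullable nginx:404 image, with the old ReplicaSet at .spec.replicas = 8 and the new one at 5 (8 + 5 = 13 = 10 desired + maxSurge 3). Scaling to 30 raises the ceiling to 33, so 20 extra replicas must be placed; the controller splits them in proportion to current size, 8/13 and 5/13 of 20, giving the old set 12 more (20 total) and the new set 8 more (13 total), exactly the values checked above. The strategy driving this is visible in the deployment dump below; as a manifest stanza it would read (hedged sketch, values taken from the dump):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 30                   # scaled from 10 while the rollout is stuck
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3                # total pods may reach replicas + 3 = 33
      maxUnavailable: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:404         # non-existent tag that blocks the rollout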
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 23 15:03:18.945: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-9092,SelfLink:/apis/apps/v1/namespaces/deployment-9092/deployments/nginx-deployment,UID:42753287-2666-436f-98e2-b9934536a375,ResourceVersion:21575418,Generation:3,CreationTimestamp:2020-01-23 15:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-23 15:03:11 +0000 UTC 2020-01-23 15:02:41 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-23 15:03:17 +0000 UTC 2020-01-23 15:03:17 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 23 15:03:19.830: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-9092,SelfLink:/apis/apps/v1/namespaces/deployment-9092/replicasets/nginx-deployment-55fb7cb77f,UID:aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2,ResourceVersion:21575449,Generation:3,CreationTimestamp:2020-01-23 15:03:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 42753287-2666-436f-98e2-b9934536a375 0xc000dc5417 0xc000dc5418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 23 15:03:19.830: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 23 15:03:19.831: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-9092,SelfLink:/apis/apps/v1/namespaces/deployment-9092/replicasets/nginx-deployment-7b8c6f4498,UID:aeaf4d09-3504-48cc-91db-64b626997ae0,ResourceVersion:21575426,Generation:3,CreationTimestamp:2020-01-23 15:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 42753287-2666-436f-98e2-b9934536a375 0xc000dc54e7 0xc000dc54e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 23 15:03:22.390: INFO: Pod "nginx-deployment-55fb7cb77f-5v8wh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5v8wh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-5v8wh,UID:1782409b-dc13-4fe8-91f9-e95b94f0b457,ResourceVersion:21575446,Generation:0,CreationTimestamp:2020-01-23 15:03:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec6627 0xc001ec6628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec6690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec66b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.390: INFO: Pod "nginx-deployment-55fb7cb77f-7cm2q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7cm2q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-7cm2q,UID:1b5ea731-3a91-45ef-bb02-2f732838bdd0,ResourceVersion:21575445,Generation:0,CreationTimestamp:2020-01-23 15:03:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec6737 0xc001ec6738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec67b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec67d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.390: INFO: Pod "nginx-deployment-55fb7cb77f-brrwj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-brrwj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-brrwj,UID:d6ea6b09-f8bd-4b15-881f-2b1731de9f4a,ResourceVersion:21575444,Generation:0,CreationTimestamp:2020-01-23 15:03:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec6857 0xc001ec6858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec68d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec68f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.390: INFO: Pod "nginx-deployment-55fb7cb77f-c9sf8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c9sf8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-c9sf8,UID:ec205182-c325-4b50-b180-726842ab9f19,ResourceVersion:21575388,Generation:0,CreationTimestamp:2020-01-23 15:03:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec6977 0xc001ec6978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec69f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec6a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-23 15:03:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.390: INFO: Pod "nginx-deployment-55fb7cb77f-gln2c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gln2c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-gln2c,UID:c28d1925-259b-4d73-99bd-f8b1b06a7290,ResourceVersion:21575361,Generation:0,CreationTimestamp:2020-01-23 15:03:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec6ae7 0xc001ec6ae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec6b50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec6b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-23 15:03:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.391: INFO: Pod "nginx-deployment-55fb7cb77f-gsswl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gsswl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-gsswl,UID:7d146d55-9f62-4d06-8875-5c4863d9e935,ResourceVersion:21575386,Generation:0,CreationTimestamp:2020-01-23 15:03:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec6c47 0xc001ec6c48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec6cb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec6cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-23 15:03:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.391: INFO: Pod "nginx-deployment-55fb7cb77f-ldr9n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ldr9n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-ldr9n,UID:065d4b33-469a-4ea3-be06-ccaf731e3983,ResourceVersion:21575422,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec6da7 0xc001ec6da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec6e10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec6e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.391: INFO: Pod "nginx-deployment-55fb7cb77f-lplvv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lplvv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-lplvv,UID:0d09ad99-685c-4b70-b8c8-0809311b7c01,ResourceVersion:21575384,Generation:0,CreationTimestamp:2020-01-23 15:03:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec6ec7 0xc001ec6ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec6f40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec6f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-23 15:03:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.391: INFO: Pod "nginx-deployment-55fb7cb77f-q8l6z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q8l6z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-q8l6z,UID:db16abf2-e274-428f-81bf-7f0e290be6c3,ResourceVersion:21575440,Generation:0,CreationTimestamp:2020-01-23 15:03:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec7037 0xc001ec7038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec70a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec70c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.391: INFO: Pod "nginx-deployment-55fb7cb77f-r796h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r796h,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-r796h,UID:9f9b7fcd-b3a4-4f39-bc0b-1dd985d55b70,ResourceVersion:21575427,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec7147 0xc001ec7148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec71c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec71e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.392: INFO: Pod "nginx-deployment-55fb7cb77f-x5djk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x5djk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-x5djk,UID:6b887613-80c1-4f60-8d03-7ac621894488,ResourceVersion:21575380,Generation:0,CreationTimestamp:2020-01-23 15:03:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec7277 0xc001ec7278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec72f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec7310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-23 15:03:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.392: INFO: Pod "nginx-deployment-55fb7cb77f-xz2qq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xz2qq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-xz2qq,UID:00b35747-f42d-488d-9950-30e00169affc,ResourceVersion:21575443,Generation:0,CreationTimestamp:2020-01-23 15:03:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec73f7 0xc001ec73f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec7460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec7480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.392: INFO: Pod "nginx-deployment-55fb7cb77f-zxdmz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zxdmz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-55fb7cb77f-zxdmz,UID:6393a967-87ad-4a43-83aa-209fefe44800,ResourceVersion:21575432,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f aa64cd4b-a6ab-4f81-a16f-d5b2e2a3f3c2 0xc001ec7507 0xc001ec7508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec7580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec75a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
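All thirteen 55fb7cb77f pods above share the same condition: their template references the image nginx:404, which never reaches Running in this run, so each pod stays Pending with the Ready condition False and is therefore reported as not available. The rule the framework is applying is, in essence, that a pod counts as available only once it is Ready and has stayed Ready for at least minReadySeconds. A self-contained sketch of that check, using simplified stand-in types rather than the real k8s.io/api structs:

package main

import (
	"fmt"
	"time"
)

// pod is a simplified stand-in for the two fields the check needs.
type pod struct {
	ready      bool      // Ready condition is True
	readySince time.Time // lastTransitionTime of the Ready condition
}

// isAvailable approximates the availability rule: Ready, and Ready for
// at least minReadySeconds (zero means available as soon as Ready).
func isAvailable(p pod, minReadySeconds int, now time.Time) bool {
	if !p.ready {
		return false // Pending pods, like those above, stop here
	}
	if minReadySeconds == 0 {
		return true
	}
	return now.Sub(p.readySince) >= time.Duration(minReadySeconds)*time.Second
}

func main() {
	now := time.Now()
	fmt.Println(isAvailable(pod{ready: false}, 0, now))                                         // false
	fmt.Println(isAvailable(pod{ready: true, readySince: now.Add(-30 * time.Second)}, 10, now)) // true
}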
Jan 23 15:03:22.392: INFO: Pod "nginx-deployment-7b8c6f4498-4lt4v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4lt4v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-4lt4v,UID:c6b51f05-42b4-40fc-8885-34b979c9e98d,ResourceVersion:21575434,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc001ec7627 0xc001ec7628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec76a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec76c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.393: INFO: Pod "nginx-deployment-7b8c6f4498-5frcz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5frcz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-5frcz,UID:5ccaa8b9-e9da-4bf1-ad05-7223e41a73d5,ResourceVersion:21575285,Generation:0,CreationTimestamp:2020-01-23 15:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc001ec7747 0xc001ec7748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec77b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec77d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-23 15:02:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 15:03:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://074975f15f2c4ae1c7b85da5cc869f9ab143ad01cc46446d1a0a4a804d8a3c62}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.393: INFO: Pod "nginx-deployment-7b8c6f4498-5qwhh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5qwhh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-5qwhh,UID:e2f27f89-c6ac-4994-b10a-a4301fe8b0ae,ResourceVersion:21575454,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc001ec78a7 0xc001ec78a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec7910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec7930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-23 15:03:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.393: INFO: Pod "nginx-deployment-7b8c6f4498-5xsrn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5xsrn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-5xsrn,UID:6dfae44b-1c80-454c-ae1d-dbb5e1a40fa4,ResourceVersion:21575451,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc001ec79f7 0xc001ec79f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec7a70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec7a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-23 15:03:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.394: INFO: Pod "nginx-deployment-7b8c6f4498-6rswx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6rswx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-6rswx,UID:8815459d-995f-42b8-bf0e-a3ab4e676f15,ResourceVersion:21575442,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc001ec7b57 0xc001ec7b58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec7bc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec7be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-23 15:03:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.394: INFO: Pod "nginx-deployment-7b8c6f4498-7qjrl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7qjrl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-7qjrl,UID:37fc1142-bc12-4063-acec-9ed09f824305,ResourceVersion:21575283,Generation:0,CreationTimestamp:2020-01-23 15:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc001ec7ca7 0xc001ec7ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec7d10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec7d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-23 15:02:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 15:03:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://51fa86c71fb6c2bea6d488b58e54f54d6622d8dccf9d69cb17fafc34da61a150}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.394: INFO: Pod "nginx-deployment-7b8c6f4498-94pmq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-94pmq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-94pmq,UID:571ea0c3-0c5f-4b7c-aff1-c9cb1dcba5b5,ResourceVersion:21575323,Generation:0,CreationTimestamp:2020-01-23 15:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc001ec7e07 0xc001ec7e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec7e90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec7eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-01-23 15:02:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 15:03:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a07be502f8025544d9a867e58ce4f9bbb048acff5d2a28550617e5990d5b5f8d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.394: INFO: Pod "nginx-deployment-7b8c6f4498-9cxhr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9cxhr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-9cxhr,UID:8ef54e06-9530-4efa-accf-cf9fe1f3b95f,ResourceVersion:21575294,Generation:0,CreationTimestamp:2020-01-23 15:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc001ec7f97 0xc001ec7f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0005f8c10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0005f8cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-23 15:02:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 15:03:02 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d66c4768d73e8767f0ec047dff2701a61c84336045f365e8e068db9edce1a0da}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.394: INFO: Pod "nginx-deployment-7b8c6f4498-9hrqb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9hrqb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-9hrqb,UID:3dc2a4ce-0b5f-431a-8725-8dac9dbb569f,ResourceVersion:21575410,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc0005f8ec7 0xc0005f8ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0005f8f40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0005f8f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.395: INFO: Pod "nginx-deployment-7b8c6f4498-cp94v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cp94v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-cp94v,UID:330fe117-20cc-4049-a015-b365d115ef97,ResourceVersion:21575421,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc0005f9037 0xc0005f9038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0005f9120} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0005f9150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.395: INFO: Pod "nginx-deployment-7b8c6f4498-cp9bh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cp9bh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-cp9bh,UID:e202c679-dbc8-4985-a0f4-84a49952d9ad,ResourceVersion:21575438,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc0005f9537 0xc0005f9538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0005f9680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0005f96f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.395: INFO: Pod "nginx-deployment-7b8c6f4498-hmrp7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hmrp7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-hmrp7,UID:fcbf33e5-70c5-45ad-90ff-dee3c4e13566,ResourceVersion:21575290,Generation:0,CreationTimestamp:2020-01-23 15:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc0005f97d7 0xc0005f97d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0005f9880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0005f98d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-23 15:02:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 15:03:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f0de0f51246e40873f38ffc7d7f18d508772dd52b08991cd6914e934638ebad8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.396: INFO: Pod "nginx-deployment-7b8c6f4498-lz9q4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lz9q4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-lz9q4,UID:e4d940a0-344c-410d-b78b-af2ab0a57e36,ResourceVersion:21575425,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc0005f9b07 0xc0005f9b08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0005f9c50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0005f9c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.396: INFO: Pod "nginx-deployment-7b8c6f4498-n7jhv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n7jhv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-n7jhv,UID:3a99e46b-78bb-41a9-8d93-f7f444a31679,ResourceVersion:21575433,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc002684037 0xc002684038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026840b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026840d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.396: INFO: Pod "nginx-deployment-7b8c6f4498-rnhgj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rnhgj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-rnhgj,UID:56226b8a-ba6f-44aa-87c6-bfe61090c8c4,ResourceVersion:21575431,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc002684167 0xc002684168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026841d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026841f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.396: INFO: Pod "nginx-deployment-7b8c6f4498-sw7bj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sw7bj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-sw7bj,UID:0e25223b-0eb9-4ea0-a8e1-c8aa7074c007,ResourceVersion:21575429,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc002684277 0xc002684278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026842e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002684300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.397: INFO: Pod "nginx-deployment-7b8c6f4498-tc4sl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tc4sl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-tc4sl,UID:08e39d96-e67d-4539-86ed-9ce18bc982f4,ResourceVersion:21575315,Generation:0,CreationTimestamp:2020-01-23 15:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc0026843a7 0xc0026843a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002684420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002684440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-23 15:02:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 15:03:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f05bb749f0d27fd5cff87861c02a6de0b2c99d2733f0a640a8afef21e53e17bd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.397: INFO: Pod "nginx-deployment-7b8c6f4498-v4px6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v4px6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-v4px6,UID:d2b99262-66b3-41fd-9c74-3de448c094f2,ResourceVersion:21575329,Generation:0,CreationTimestamp:2020-01-23 15:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc002684517 0xc002684518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002684590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026845b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-23 15:02:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 15:03:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c1f412e52d38e751ab11e054e629be1f8c4929ef9fa1b473e21cf1c6091e1835}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.397: INFO: Pod "nginx-deployment-7b8c6f4498-v8vn8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v8vn8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-v8vn8,UID:d9638813-d979-4a5c-9cfb-9b46024bb6cd,ResourceVersion:21575423,Generation:0,CreationTimestamp:2020-01-23 15:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc002684687 0xc002684688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026846f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002684710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 23 15:03:22.397: INFO: Pod "nginx-deployment-7b8c6f4498-vclvp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vclvp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9092,SelfLink:/api/v1/namespaces/deployment-9092/pods/nginx-deployment-7b8c6f4498-vclvp,UID:17cab48c-3931-4d01-90ac-85d61e09c18d,ResourceVersion:21575318,Generation:0,CreationTimestamp:2020-01-23 15:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 aeaf4d09-3504-48cc-91db-64b626997ae0 0xc002684797 0xc002684798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qpdsd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpdsd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qpdsd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002684810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002684830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:03:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 15:02:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-23 15:02:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-23 15:03:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f49ccec47a069b3e2bfdaa576341c2913aabe22c8343c5bd947fed3ece064d44}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
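The "available"/"not available" labels in the pod dumps above come from the deployment controller's availability rule: a pod counts as available once its Ready condition has held for at least minReadySeconds. A minimal sketch of that check in Go, using the corev1 types (the helper name is ours, not the framework's):

package sketch

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// IsPodAvailable is a simplified form of the rule the deployment
// controller applies: the pod must be Ready, and must have been Ready
// for at least minReadySeconds as of "now".
func IsPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady || c.Status != corev1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		// LastTransitionTime records when Ready last flipped to True.
		return now.Sub(c.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
	}
	return false
}

The Pending pods above fail this check immediately: they carry only a PodScheduled condition, so the loop never finds a Ready=True entry.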
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:03:22.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9092" for this suite.
Jan 23 15:04:13.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:04:13.265: INFO: namespace deployment-9092 deletion completed in 48.117883214s

• [SLOW TEST:92.102 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
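The proportional-scaling behavior verified by this test spreads a scaling delta across a Deployment's ReplicaSets in proportion to their current sizes, so an in-flight rollout keeps its old/new ratio. The sketch below shows only the core arithmetic; the real controller additionally tracks surge budgets and tie-breaking, and the leftover rule here is a simplification of ours:

package main

import "fmt"

// proportionalScale distributes delta extra replicas across ReplicaSets
// in proportion to their current replica counts.
func proportionalScale(rsReplicas []int32, delta int32) []int32 {
	var total int32
	for _, r := range rsReplicas {
		total += r
	}
	out := make([]int32, len(rsReplicas))
	var distributed int32
	for i, r := range rsReplicas {
		share := delta * r / total // integer share of the delta
		out[i] = r + share
		distributed += share
	}
	// Hand the rounding leftover to the first ReplicaSet in the slice.
	if len(out) > 0 {
		out[0] += delta - distributed
	}
	return out
}

func main() {
	// Scaling from 10 to 15 replicas mid-rollout, with 3 pods on the new
	// ReplicaSet and 7 on the old one: both grow, preserving the ratio.
	fmt.Println(proportionalScale([]int32{3, 7}, 5)) // [5 10]
}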
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:04:13.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 23 15:04:23.456: INFO: Pod pod-hostip-4c7a29c9-1fc5-4004-a822-4ca5bfdd77d2 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:04:23.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3262" for this suite.
Jan 23 15:04:45.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:04:45.631: INFO: namespace pods-3262 deletion completed in 22.166647785s

• [SLOW TEST:32.365 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
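The host-IP test reduces to polling the pod until status.hostIP is populated by the kubelet, as in the single INFO line above. A sketch with client-go, assuming a recent release where typed calls take a context (wait.PollImmediate still exists there, though the newest client-go prefers the context-based poll helpers):

package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForHostIP polls until the scheduled pod reports a non-empty
// status.hostIP, which is the condition this conformance test asserts.
func WaitForHostIP(c kubernetes.Interface, ns, name string) (string, error) {
	var hostIP string
	err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		hostIP = pod.Status.HostIP // set once the pod is bound to a node and started
		return hostIP != "", nil
	})
	if err != nil {
		return "", fmt.Errorf("pod %s/%s never reported a hostIP: %w", ns, name, err)
	}
	return hostIP, nil
}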
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:04:45.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e7f5654e-8b25-46f4-8fe9-e834d53a91cf
STEP: Creating a pod to test consume configMaps
Jan 23 15:04:45.827: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9" in namespace "projected-7414" to be "success or failure"
Jan 23 15:04:45.842: INFO: Pod "pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.593524ms
Jan 23 15:04:47.855: INFO: Pod "pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027784802s
Jan 23 15:04:49.870: INFO: Pod "pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042377275s
Jan 23 15:04:51.882: INFO: Pod "pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055024827s
Jan 23 15:04:53.896: INFO: Pod "pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9": Phase="Running", Reason="", readiness=true. Elapsed: 8.068656616s
Jan 23 15:04:55.908: INFO: Pod "pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080880422s
STEP: Saw pod success
Jan 23 15:04:55.908: INFO: Pod "pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9" satisfied condition "success or failure"
Jan 23 15:04:55.915: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 23 15:04:56.000: INFO: Waiting for pod pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9 to disappear
Jan 23 15:04:56.051: INFO: Pod pod-projected-configmaps-12dc0ebd-e6c0-4635-8e10-f6e968cf5bc9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:04:56.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7414" for this suite.
Jan 23 15:05:02.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:05:02.277: INFO: namespace projected-7414 deletion completed in 6.217680665s

• [SLOW TEST:16.646 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
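The pod this test creates pairs a projected volume sourcing a ConfigMap with a pod-level runAsUser, so the mounted keys must be readable by a non-root UID. A sketch of that shape using the corev1 types; the names, image, and command here are illustrative, not the test's own:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// NonRootProjectedPod builds a pod with a projected ConfigMap volume
// mounted into a container running as a non-root UID.
func NonRootProjectedPod(ns, cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-nonroot", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root UID
			},
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-cm",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
}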
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:05:02.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7391
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 23 15:05:02.477: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 23 15:05:44.791: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7391 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 15:05:44.792: INFO: >>> kubeConfig: /root/.kube/config
I0123 15:05:44.883545       8 log.go:172] (0xc000979810) (0xc001428500) Create stream
I0123 15:05:44.883811       8 log.go:172] (0xc000979810) (0xc001428500) Stream added, broadcasting: 1
I0123 15:05:44.896719       8 log.go:172] (0xc000979810) Reply frame received for 1
I0123 15:05:44.896814       8 log.go:172] (0xc000979810) (0xc000112c80) Create stream
I0123 15:05:44.896852       8 log.go:172] (0xc000979810) (0xc000112c80) Stream added, broadcasting: 3
I0123 15:05:44.899651       8 log.go:172] (0xc000979810) Reply frame received for 3
I0123 15:05:44.899727       8 log.go:172] (0xc000979810) (0xc000b900a0) Create stream
I0123 15:05:44.899738       8 log.go:172] (0xc000979810) (0xc000b900a0) Stream added, broadcasting: 5
I0123 15:05:44.902626       8 log.go:172] (0xc000979810) Reply frame received for 5
I0123 15:05:46.091158       8 log.go:172] (0xc000979810) Data frame received for 3
I0123 15:05:46.091254       8 log.go:172] (0xc000112c80) (3) Data frame handling
I0123 15:05:46.091283       8 log.go:172] (0xc000112c80) (3) Data frame sent
I0123 15:05:46.264387       8 log.go:172] (0xc000979810) Data frame received for 1
I0123 15:05:46.264521       8 log.go:172] (0xc000979810) (0xc000b900a0) Stream removed, broadcasting: 5
I0123 15:05:46.264575       8 log.go:172] (0xc001428500) (1) Data frame handling
I0123 15:05:46.264599       8 log.go:172] (0xc001428500) (1) Data frame sent
I0123 15:05:46.264624       8 log.go:172] (0xc000979810) (0xc000112c80) Stream removed, broadcasting: 3
I0123 15:05:46.264647       8 log.go:172] (0xc000979810) (0xc001428500) Stream removed, broadcasting: 1
I0123 15:05:46.264856       8 log.go:172] (0xc000979810) (0xc001428500) Stream removed, broadcasting: 1
I0123 15:05:46.264870       8 log.go:172] (0xc000979810) (0xc000112c80) Stream removed, broadcasting: 3
I0123 15:05:46.264886       8 log.go:172] (0xc000979810) (0xc000b900a0) Stream removed, broadcasting: 5
Jan 23 15:05:46.265: INFO: Found all expected endpoints: [netserver-0]
I0123 15:05:46.266081       8 log.go:172] (0xc000979810) Go away received
Jan 23 15:05:46.279: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7391 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 15:05:46.279: INFO: >>> kubeConfig: /root/.kube/config
I0123 15:05:46.333634       8 log.go:172] (0xc000834dc0) (0xc000b90d20) Create stream
I0123 15:05:46.333689       8 log.go:172] (0xc000834dc0) (0xc000b90d20) Stream added, broadcasting: 1
I0123 15:05:46.340904       8 log.go:172] (0xc000834dc0) Reply frame received for 1
I0123 15:05:46.340930       8 log.go:172] (0xc000834dc0) (0xc000113180) Create stream
I0123 15:05:46.340937       8 log.go:172] (0xc000834dc0) (0xc000113180) Stream added, broadcasting: 3
I0123 15:05:46.344562       8 log.go:172] (0xc000834dc0) Reply frame received for 3
I0123 15:05:46.344674       8 log.go:172] (0xc000834dc0) (0xc0019f8140) Create stream
I0123 15:05:46.344692       8 log.go:172] (0xc000834dc0) (0xc0019f8140) Stream added, broadcasting: 5
I0123 15:05:46.346369       8 log.go:172] (0xc000834dc0) Reply frame received for 5
I0123 15:05:47.472288       8 log.go:172] (0xc000834dc0) Data frame received for 3
I0123 15:05:47.472394       8 log.go:172] (0xc000113180) (3) Data frame handling
I0123 15:05:47.472435       8 log.go:172] (0xc000113180) (3) Data frame sent
I0123 15:05:47.612158       8 log.go:172] (0xc000834dc0) Data frame received for 1
I0123 15:05:47.612350       8 log.go:172] (0xc000b90d20) (1) Data frame handling
I0123 15:05:47.612417       8 log.go:172] (0xc000b90d20) (1) Data frame sent
I0123 15:05:47.612475       8 log.go:172] (0xc000834dc0) (0xc000b90d20) Stream removed, broadcasting: 1
I0123 15:05:47.613107       8 log.go:172] (0xc000834dc0) (0xc000113180) Stream removed, broadcasting: 3
I0123 15:05:47.613172       8 log.go:172] (0xc000834dc0) (0xc0019f8140) Stream removed, broadcasting: 5
I0123 15:05:47.613225       8 log.go:172] (0xc000834dc0) Go away received
I0123 15:05:47.613660       8 log.go:172] (0xc000834dc0) (0xc000b90d20) Stream removed, broadcasting: 1
I0123 15:05:47.614042       8 log.go:172] (0xc000834dc0) (0xc000113180) Stream removed, broadcasting: 3
I0123 15:05:47.614106       8 log.go:172] (0xc000834dc0) (0xc0019f8140) Stream removed, broadcasting: 5
Jan 23 15:05:47.614: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:05:47.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7391" for this suite.
Jan 23 15:06:11.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:06:11.824: INFO: namespace pod-network-test-7391 deletion completed in 24.194110902s

• [SLOW TEST:69.547 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
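The ExecWithOptions lines above run 'echo hostName | nc -w 1 -u <podIP> 8081' from a host-network pod against each netserver pod, which replies with its hostname over UDP; finding every expected hostname proves node-to-pod UDP connectivity. The same probe in plain Go stdlib:

package sketch

import (
	"fmt"
	"net"
	"time"
)

// ProbeUDP mirrors the test's nc invocation: send "hostName" to a
// netserver pod's UDP port and read the pod's hostname back.
func ProbeUDP(podIP string, port int) (string, error) {
	conn, err := net.DialTimeout("udp", fmt.Sprintf("%s:%d", podIP, port), time.Second)
	if err != nil {
		return "", err
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("hostName")); err != nil {
		return "", err
	}
	if err := conn.SetReadDeadline(time.Now().Add(time.Second)); err != nil {
		return "", err
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		// A read timeout here is what the test would report as a
		// missing endpoint.
		return "", err
	}
	return string(buf[:n]), nil
}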
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:06:11.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jan 23 15:06:12.079: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 23 15:06:17.086: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:06:18.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9745" for this suite.
Jan 23 15:06:24.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:06:24.278: INFO: namespace replication-controller-9745 deletion completed in 6.149175566s

• [SLOW TEST:12.453 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
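"Releasing" a pod means the ReplicationController clears its controller owner reference once the pod's labels stop matching the RC's selector. The test triggers that by relabeling the pod; a sketch of the relabel with client-go (a recent, context-taking API is assumed, and the replacement label value is illustrative):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// ReleasePod changes the label the ReplicationController selects on, so
// the RC stops matching the pod and orphans (releases) it.
func ReleasePod(c kubernetes.Interface, ns, podName string) error {
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	_, err := c.CoreV1().Pods(ns).Patch(
		context.TODO(), podName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}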
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:06:24.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 23 15:06:24.408: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 15.547095ms)
Jan 23 15:06:24.413: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.636057ms)
Jan 23 15:06:24.418: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.819622ms)
Jan 23 15:06:24.428: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.327369ms)
Jan 23 15:06:24.439: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 11.156259ms)
Jan 23 15:06:24.479: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 39.515228ms)
Jan 23 15:06:24.487: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.62741ms)
Jan 23 15:06:24.497: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.497978ms)
Jan 23 15:06:24.503: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.897635ms)
Jan 23 15:06:24.509: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.797598ms)
Jan 23 15:06:24.517: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.985468ms)
Jan 23 15:06:24.522: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.034881ms)
Jan 23 15:06:24.527: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.117789ms)
Jan 23 15:06:24.532: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.764117ms)
Jan 23 15:06:24.537: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.151757ms)
Jan 23 15:06:24.544: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.441638ms)
Jan 23 15:06:24.556: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 12.072917ms)
Jan 23 15:06:24.569: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.627654ms)
Jan 23 15:06:24.581: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 11.592569ms)
Jan 23 15:06:24.588: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.766765ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:06:24.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9924" for this suite.
Jan 23 15:06:30.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:06:30.774: INFO: namespace proxy-9924 deletion completed in 6.181334181s

• [SLOW TEST:6.495 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
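Each of the twenty requests above goes through the apiserver's node proxy subresource, i.e. GET /api/v1/nodes/<node>/proxy/logs/, which returns the node's log directory listing (the truncated alternatives.log entries). A client-go sketch of the same call, assuming a recent release where DoRaw takes a context:

package sketch

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// NodeLogsViaProxy fetches /api/v1/nodes/<node>/proxy/logs/ through the
// apiserver, the endpoint exercised twenty times in the log above.
func NodeLogsViaProxy(c kubernetes.Interface, node string) (string, error) {
	body, err := c.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(node).
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	if err != nil {
		return "", err
	}
	return string(body), nil
}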
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:06:30.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 23 15:06:30.838: INFO: Waiting up to 5m0s for pod "downward-api-5d50fa4d-d1d6-4134-9aed-5f64f99ca80f" in namespace "downward-api-7556" to be "success or failure"
Jan 23 15:06:30.856: INFO: Pod "downward-api-5d50fa4d-d1d6-4134-9aed-5f64f99ca80f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.643568ms
Jan 23 15:06:33.119: INFO: Pod "downward-api-5d50fa4d-d1d6-4134-9aed-5f64f99ca80f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280729865s
Jan 23 15:06:35.125: INFO: Pod "downward-api-5d50fa4d-d1d6-4134-9aed-5f64f99ca80f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287719151s
Jan 23 15:06:37.137: INFO: Pod "downward-api-5d50fa4d-d1d6-4134-9aed-5f64f99ca80f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299623835s
Jan 23 15:06:39.144: INFO: Pod "downward-api-5d50fa4d-d1d6-4134-9aed-5f64f99ca80f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.305972417s
STEP: Saw pod success
Jan 23 15:06:39.144: INFO: Pod "downward-api-5d50fa4d-d1d6-4134-9aed-5f64f99ca80f" satisfied condition "success or failure"
Jan 23 15:06:39.146: INFO: Trying to get logs from node iruya-node pod downward-api-5d50fa4d-d1d6-4134-9aed-5f64f99ca80f container dapi-container: 
STEP: delete the pod
Jan 23 15:06:39.203: INFO: Waiting for pod downward-api-5d50fa4d-d1d6-4134-9aed-5f64f99ca80f to disappear
Jan 23 15:06:39.213: INFO: Pod downward-api-5d50fa4d-d1d6-4134-9aed-5f64f99ca80f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:06:39.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7556" for this suite.
Jan 23 15:06:45.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:06:45.429: INFO: namespace downward-api-7556 deletion completed in 6.209427627s

• [SLOW TEST:14.654 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
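The container under test sets no resource limits of its own but reads limits.cpu and limits.memory through downward-API env vars, so the kubelet substitutes the node's allocatable capacity as the default. A sketch of the env wiring with the corev1 types (the env var names are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// DefaultLimitEnv builds env vars fed by resourceFieldRef for
// limits.cpu / limits.memory; with no limits set on the container,
// the values fall back to node allocatable.
func DefaultLimitEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
}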
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:06:45.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 23 15:06:45.606: INFO: Waiting up to 5m0s for pod "downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d" in namespace "downward-api-6197" to be "success or failure"
Jan 23 15:06:45.667: INFO: Pod "downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d": Phase="Pending", Reason="", readiness=false. Elapsed: 60.645209ms
Jan 23 15:06:47.678: INFO: Pod "downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072282566s
Jan 23 15:06:49.807: INFO: Pod "downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200656288s
Jan 23 15:06:51.816: INFO: Pod "downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21010764s
Jan 23 15:06:53.829: INFO: Pod "downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222521332s
Jan 23 15:06:55.841: INFO: Pod "downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.234581635s
STEP: Saw pod success
Jan 23 15:06:55.841: INFO: Pod "downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d" satisfied condition "success or failure"
Jan 23 15:06:55.846: INFO: Trying to get logs from node iruya-node pod downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d container dapi-container: 
STEP: delete the pod
Jan 23 15:06:55.992: INFO: Waiting for pod downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d to disappear
Jan 23 15:06:56.007: INFO: Pod downward-api-7baaa513-96b8-4b16-a377-1e2821d7575d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:06:56.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6197" for this suite.
Jan 23 15:07:02.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:07:02.247: INFO: namespace downward-api-6197 deletion completed in 6.232587285s

• [SLOW TEST:16.817 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
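
The companion spec sets explicit requests and limits and expects all four values back as env vars. Relative to the sketch above, only the container definition changes; a compilable fragment (package, helper, and quantities are hypothetical) showing the divisor handling:

    package demo

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // explicitResourcesContainer mirrors the second spec: explicit
    // requests/limits, echoed back through resourceFieldRef. The divisor
    // rescales the reported quantity (e.g. "1m" reports CPU in millicores).
    func explicitResourcesContainer() corev1.Container {
        return corev1.Container{
            Name:    "dapi-container",
            Image:   "busybox:1.29",
            Command: []string{"sh", "-c", "env"},
            Resources: corev1.ResourceRequirements{
                Requests: corev1.ResourceList{
                    corev1.ResourceCPU:    resource.MustParse("250m"),
                    corev1.ResourceMemory: resource.MustParse("32Mi"),
                },
                Limits: corev1.ResourceList{
                    corev1.ResourceCPU:    resource.MustParse("500m"),
                    corev1.ResourceMemory: resource.MustParse("64Mi"),
                },
            },
            Env: []corev1.EnvVar{
                resourceEnv("CPU_REQUEST", "requests.cpu", "1m"),
                resourceEnv("CPU_LIMIT", "limits.cpu", "1m"),
                resourceEnv("MEMORY_REQUEST", "requests.memory", "1Mi"),
                resourceEnv("MEMORY_LIMIT", "limits.memory", "1Mi"),
            },
        }
    }

    func resourceEnv(name, res, divisor string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: name,
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                    Resource: res,
                    Divisor:  resource.MustParse(divisor),
                },
            },
        }
    }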
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:07:02.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 23 15:07:02.370: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 23 15:07:02.378: INFO: Waiting for terminating namespaces to be deleted...
Jan 23 15:07:02.381: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 23 15:07:02.390: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 23 15:07:02.390: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 23 15:07:02.390: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 23 15:07:02.390: INFO: 	Container weave ready: true, restart count 0
Jan 23 15:07:02.390: INFO: 	Container weave-npc ready: true, restart count 0
Jan 23 15:07:02.390: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 23 15:07:02.398: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 23 15:07:02.398: INFO: 	Container coredns ready: true, restart count 0
Jan 23 15:07:02.398: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 23 15:07:02.398: INFO: 	Container etcd ready: true, restart count 0
Jan 23 15:07:02.398: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 23 15:07:02.398: INFO: 	Container weave ready: true, restart count 0
Jan 23 15:07:02.398: INFO: 	Container weave-npc ready: true, restart count 0
Jan 23 15:07:02.398: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 23 15:07:02.398: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 23 15:07:02.398: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 23 15:07:02.398: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 23 15:07:02.398: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 23 15:07:02.398: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 23 15:07:02.398: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 23 15:07:02.398: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 23 15:07:02.398: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 23 15:07:02.398: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ec8c796b840442], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:07:03.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7473" for this suite.
Jan 23 15:07:09.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:07:09.575: INFO: namespace sched-pred-7473 deletion completed in 6.141581679s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.327 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
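
The scheduler spec relies on nothing more than a nodeSelector that no node satisfies; the single FailedScheduling event above is the expected outcome. A sketch of the pod and the event query (the label key, pause image tag, and namespace are hypothetical):

    package demo

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // restrictedPod carries a nodeSelector no node in the cluster satisfies,
    // so the scheduler should emit FailedScheduling and leave it Pending.
    var restrictedPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{"example.com/nonexistent": "true"},
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1",
            }},
        },
    }

    // failedSchedulingEvents fetches the events the spec above waits for.
    func failedSchedulingEvents(ctx context.Context, c kubernetes.Interface, ns string) (*corev1.EventList, error) {
        return c.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
            FieldSelector: "reason=FailedScheduling,involvedObject.name=restricted-pod",
        })
    }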
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:07:09.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:07:17.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-726" for this suite.
Jan 23 15:08:09.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:08:09.909: INFO: namespace kubelet-test-726 deletion completed in 52.1860656s

• [SLOW TEST:60.334 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
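
hostAliases is plain pod-spec data: the kubelet appends each entry to the container's /etc/hosts. A fragment with hypothetical addresses and names:

    package demo

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostAliasesPod asks the kubelet to append two names for one IP to the
    // container's /etc/hosts; `cat /etc/hosts` in the pod log should show them.
    var hostAliasesPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            HostAliases: []corev1.HostAlias{{
                IP:        "123.45.67.89",
                Hostnames: []string{"host-one", "host-two"},
            }},
            Containers: []corev1.Container{{
                Name:    "busybox",
                Image:   "busybox:1.29",
                Command: []string{"sh", "-c", "cat /etc/hosts"},
            }},
        },
    }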
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:08:09.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan 23 15:08:10.057: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:08:10.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7763" for this suite.
Jan 23 15:08:16.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:08:16.714: INFO: namespace kubectl-7763 deletion completed in 6.547228474s

• [SLOW TEST:6.805 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
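
kubectl proxy with port 0 (the `-p 0` in the log) binds an ephemeral port, so the only way to find the port is to parse the banner the proxy prints on startup. A sketch of what the test does, assuming the banner keeps its usual "Starting to serve on HOST:PORT" form:

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "net/http"
        "os/exec"
        "strings"
    )

    func main() {
        // Port 0 lets the kernel pick a free port; the proxy reports it on stdout.
        cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
            "proxy", "--port=0", "--disable-filter")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        defer cmd.Process.Kill()

        // First line is expected to look like "Starting to serve on 127.0.0.1:37041".
        banner, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            panic(err)
        }
        addr := strings.TrimSpace(strings.TrimPrefix(banner, "Starting to serve on "))

        // The conformance check is simply that /api/ answers through the proxy.
        resp, err := http.Get("http://" + addr + "/api/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }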
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:08:16.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0123 15:08:46.928486       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 15:08:46.928: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:08:46.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5576" for this suite.
Jan 23 15:08:54.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:08:54.450: INFO: namespace gc-5576 deletion completed in 7.513896104s

• [SLOW TEST:37.736 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
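
Orphan propagation is carried entirely by deleteOptions: the deployment's dependents stay alive, and the garbage collector strips their ownerReferences instead of deleting them, which is what the 30-second wait above verifies. A fragment (function name hypothetical):

    package demo

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteDeploymentOrphaningReplicaSets deletes only the Deployment object.
    // With PropagationPolicy=Orphan the ReplicaSet (and its pods) survive; the
    // GC removes their ownerReferences rather than cascading the delete.
    func deleteDeploymentOrphaningReplicaSets(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return c.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
            PropagationPolicy: &orphan,
        })
    }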
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:08:54.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 23 15:08:54.581: INFO: Waiting up to 5m0s for pod "pod-c5772ea4-c914-4e67-af9b-f69ded52469f" in namespace "emptydir-271" to be "success or failure"
Jan 23 15:08:54.586: INFO: Pod "pod-c5772ea4-c914-4e67-af9b-f69ded52469f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442022ms
Jan 23 15:08:56.994: INFO: Pod "pod-c5772ea4-c914-4e67-af9b-f69ded52469f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.412973741s
Jan 23 15:08:59.014: INFO: Pod "pod-c5772ea4-c914-4e67-af9b-f69ded52469f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432761196s
Jan 23 15:09:01.023: INFO: Pod "pod-c5772ea4-c914-4e67-af9b-f69ded52469f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441924177s
Jan 23 15:09:03.033: INFO: Pod "pod-c5772ea4-c914-4e67-af9b-f69ded52469f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.452179268s
Jan 23 15:09:05.041: INFO: Pod "pod-c5772ea4-c914-4e67-af9b-f69ded52469f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.459439659s
STEP: Saw pod success
Jan 23 15:09:05.041: INFO: Pod "pod-c5772ea4-c914-4e67-af9b-f69ded52469f" satisfied condition "success or failure"
Jan 23 15:09:05.044: INFO: Trying to get logs from node iruya-node pod pod-c5772ea4-c914-4e67-af9b-f69ded52469f container test-container: 
STEP: delete the pod
Jan 23 15:09:05.099: INFO: Waiting for pod pod-c5772ea4-c914-4e67-af9b-f69ded52469f to disappear
Jan 23 15:09:05.113: INFO: Pod pod-c5772ea4-c914-4e67-af9b-f69ded52469f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:09:05.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-271" for this suite.
Jan 23 15:09:11.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:09:11.307: INFO: namespace emptydir-271 deletion completed in 6.184986206s

• [SLOW TEST:16.856 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
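
For the emptyDir check, the pod mounts a volume whose EmptyDirVolumeSource is left empty, which selects the default medium (node-backed storage rather than tmpfs), and the container prints the mount point's permission bits. A fragment with busybox standing in for the test's mount-test image:

    package demo

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirPod mounts an emptyDir on the default medium and lists the
    // directory so its mode shows up in the pod log.
    var emptyDirPod = &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // Empty source = default medium; Medium: corev1.StorageMediumMemory
                // would request tmpfs instead.
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox:1.29",
                Command:      []string{"sh", "-c", "ls -ld /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }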
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:09:11.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 23 15:09:11.436: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9170,SelfLink:/api/v1/namespaces/watch-9170/configmaps/e2e-watch-test-watch-closed,UID:a758c988-5805-4636-8306-31ef36276e87,ResourceVersion:21576422,Generation:0,CreationTimestamp:2020-01-23 15:09:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 23 15:09:11.436: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9170,SelfLink:/api/v1/namespaces/watch-9170/configmaps/e2e-watch-test-watch-closed,UID:a758c988-5805-4636-8306-31ef36276e87,ResourceVersion:21576423,Generation:0,CreationTimestamp:2020-01-23 15:09:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 23 15:09:11.574: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9170,SelfLink:/api/v1/namespaces/watch-9170/configmaps/e2e-watch-test-watch-closed,UID:a758c988-5805-4636-8306-31ef36276e87,ResourceVersion:21576424,Generation:0,CreationTimestamp:2020-01-23 15:09:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 23 15:09:11.574: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9170,SelfLink:/api/v1/namespaces/watch-9170/configmaps/e2e-watch-test-watch-closed,UID:a758c988-5805-4636-8306-31ef36276e87,ResourceVersion:21576425,Generation:0,CreationTimestamp:2020-01-23 15:09:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:09:11.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9170" for this suite.
Jan 23 15:09:17.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:09:17.731: INFO: namespace watch-9170 deletion completed in 6.150721905s

• [SLOW TEST:6.424 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
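
The watch spec depends on resourceVersion semantics: a new watch started from the last observed resourceVersion replays every change that happened while the first watch was closed (the MODIFIED mutation-2 and DELETED events above). A compact client-go sketch; the namespace, the label selector, and the assumption that another writer mutates the configmap in between are all illustrative:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        opts := metav1.ListOptions{LabelSelector: "watch-this-configmap=watch-closed-and-restarted"}

        // First watch: observe two events, remember the last resourceVersion, close.
        w1, err := client.CoreV1().ConfigMaps("demo").Watch(context.TODO(), opts)
        if err != nil {
            panic(err)
        }
        var lastRV string
        for i := 0; i < 2; i++ {
            ev := <-w1.ResultChan()
            if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
                lastRV = cm.ResourceVersion
                fmt.Println("got:", ev.Type, cm.Name, cm.ResourceVersion)
            }
        }
        w1.Stop()

        // ...the configmap is modified and deleted while no watch is open...

        // Second watch: resuming from lastRV replays the missed notifications.
        opts.ResourceVersion = lastRV
        w2, err := client.CoreV1().ConfigMaps("demo").Watch(context.TODO(), opts)
        if err != nil {
            panic(err)
        }
        defer w2.Stop()
        for ev := range w2.ResultChan() {
            fmt.Println("replayed:", ev.Type)
        }
    }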
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:09:17.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:09:48.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9973" for this suite.
Jan 23 15:09:54.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:09:54.360: INFO: namespace namespaces-9973 deletion completed in 6.114138427s
STEP: Destroying namespace "nsdeletetest-7697" for this suite.
Jan 23 15:09:54.364: INFO: Namespace nsdeletetest-7697 was already deleted
STEP: Destroying namespace "nsdeletetest-6484" for this suite.
Jan 23 15:10:00.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:10:00.569: INFO: namespace nsdeletetest-6484 deletion completed in 6.204664417s

• [SLOW TEST:42.837 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
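
Namespace deletion is asynchronous: the API server marks the namespace Terminating and the namespace controller removes every pod before the object itself disappears, which is why the spec polls rather than asserting immediately. A helper sketch (function name hypothetical):

    package demo

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNamespaceGone blocks until the namespace (and with it, every pod
    // inside) has been fully removed, or the context is cancelled.
    func waitForNamespaceGone(ctx context.Context, c kubernetes.Interface, name string) error {
        for {
            _, err := c.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return nil // deletion complete
            }
            if err != nil {
                return err
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second):
            }
        }
    }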
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:10:00.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 23 15:10:00.670: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6084" to be "success or failure"
Jan 23 15:10:00.701: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 31.1854ms
Jan 23 15:10:02.708: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038413611s
Jan 23 15:10:04.722: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052657393s
Jan 23 15:10:06.734: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063772098s
Jan 23 15:10:08.771: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101606581s
Jan 23 15:10:10.778: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108199785s
STEP: Saw pod success
Jan 23 15:10:10.778: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 23 15:10:10.781: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 23 15:10:10.857: INFO: Waiting for pod pod-host-path-test to disappear
Jan 23 15:10:10.869: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:10:10.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6084" for this suite.
Jan 23 15:10:16.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:10:17.020: INFO: namespace hostpath-6084 deletion completed in 6.140218813s

• [SLOW TEST:16.450 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
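
The hostPath variant differs from emptyDir only in the volume source; the Type field tells the kubelet what to expect (or create) at the host path. A fragment with a hypothetical host directory:

    package demo

    import corev1 "k8s.io/api/core/v1"

    // hostPathType asks the kubelet to create the directory if it is absent.
    var hostPathType = corev1.HostPathDirectoryOrCreate

    // hostPathVolume mirrors the volume mounted by pod-host-path-test above.
    var hostPathVolume = corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            HostPath: &corev1.HostPathVolumeSource{
                Path: "/tmp/hostpath-demo", // hypothetical path on the node
                Type: &hostPathType,
            },
        },
    }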
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:10:17.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 23 15:10:25.224: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:10:25.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1942" for this suite.
Jan 23 15:10:31.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:10:31.507: INFO: namespace container-runtime-1942 deletion completed in 6.172344773s

• [SLOW TEST:14.487 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
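
With TerminationMessagePolicy FallbackToLogsOnError, the kubelet reads the termination message from the file at terminationMessagePath whenever the container wrote one; logs are consulted only when the container fails and the file is empty. A succeeding container that wrote "OK" therefore reports "OK", matching the assertion above. A fragment (names hypothetical):

    package demo

    import corev1 "k8s.io/api/core/v1"

    // terminationMessageContainer writes "OK" to the termination-log file and
    // exits 0; the kubelet should surface "OK" as the termination message.
    var terminationMessageContainer = corev1.Container{
        Name:                     "termination-message-container",
        Image:                    "busybox:1.29",
        Command:                  []string{"sh", "-c", "printf OK > /dev/termination-log"},
        TerminationMessagePath:   "/dev/termination-log",
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }

    // terminationMessage extracts what the kubelet recorded once the pod has
    // terminated (empty string while the container is still running).
    func terminationMessage(pod *corev1.Pod) string {
        if len(pod.Status.ContainerStatuses) == 0 {
            return ""
        }
        if t := pod.Status.ContainerStatuses[0].State.Terminated; t != nil {
            return t.Message
        }
        return ""
    }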
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 23 15:10:31.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 23 15:10:38.720: INFO: 0 pods remaining
Jan 23 15:10:38.720: INFO: 0 pods have nil DeletionTimestamp
Jan 23 15:10:38.720: INFO: 
STEP: Gathering metrics
W0123 15:10:39.528281       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 23 15:10:39.528: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 23 15:10:39.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7470" for this suite.
Jan 23 15:10:49.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 23 15:10:49.712: INFO: namespace gc-7470 deletion completed in 10.17769233s

• [SLOW TEST:18.205 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
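
"Keep the rc around until all its pods are deleted" corresponds to foreground cascading deletion: the ReplicationController gets a deletionTimestamp and a foregroundDeletion finalizer, and is only removed after the garbage collector has deleted its pods. A fragment (function name hypothetical):

    package demo

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteRCInForeground deletes a ReplicationController with foreground
    // propagation: the RC lingers, held by the foregroundDeletion finalizer,
    // until the garbage collector has removed all of its pods.
    func deleteRCInForeground(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        fg := metav1.DeletePropagationForeground
        return c.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
            PropagationPolicy: &fg,
        })
    }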
SSSSSSSS
Jan 23 15:10:49.713: INFO: Running AfterSuite actions on all nodes
Jan 23 15:10:49.713: INFO: Running AfterSuite actions on node 1
Jan 23 15:10:49.713: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8076.299 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS